
Getting Started with Nginx on Linux: a Complete Tutorial

Nginx (pronounced "Engine-X") is a popular, open-source, lightweight, and high-performance web server that also acts as a reverse proxy, load balancer, mail proxy, and HTTP cache. Nginx is easy to configure to serve static web content or to act as a proxy server. It can also serve dynamic content using FastCGI or SCGI handlers for scripts, WSGI application servers, or Phusion Passenger modules, and it can work as a software load balancer. Nginx uses an asynchronous, event-driven approach, rather than threads, to handle requests, and this modular event-driven architecture provides predictable performance under high load.
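The event-driven model can be illustrated with a small Python sketch (a conceptual analogy of our own, not Nginx's actual C implementation): a single event loop interleaves many simulated requests instead of dedicating a thread to each connection.

```python
import asyncio

async def handle_request(request_id):
    # Simulate waiting on slow I/O (disk or upstream) without blocking
    # the event loop; other requests make progress in the meantime.
    await asyncio.sleep(0.01)
    return f"response-{request_id}"

async def main():
    # One loop, many concurrent requests -- no thread per connection.
    return await asyncio.gather(*(handle_request(i) for i in range(100)))

responses = asyncio.run(main())
print(len(responses))   # 100
print(responses[0])     # response-0
```

Because the waits overlap, the 100 simulated requests finish in roughly the time of one, which is the intuition behind Nginx's predictable performance under load.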
In this tutorial, we will get Nginx up and running on Linux, using terminal commands to install it, control the service, and test the configuration. By the end, you will be familiar with the commands needed to set up Nginx on your operating system.

What you need to get started:

1. A Linux system. Ubuntu 20.04, Linux Mint, or any other member of the Linux family is a suitable operating system for this tutorial.
2. A user account with sudo or root privileges.
3. Access to a terminal window/command line.

Getting Started with Nginx

1. Installation

First off, update the software repositories. This helps make sure that the latest updates and patches are installed. Open a terminal window and enter the following:

sudo apt-get update

Now, to install Nginx from the Ubuntu repository, enter the following command in the terminal:

sudo apt-get install nginx

If you are on Fedora, install Nginx with:

sudo dnf install nginx

And if you are on CentOS or RHEL, the installation is done with:

sudo yum install epel-release && sudo yum install nginx

Finally, test that the installation succeeded by entering:

nginx -v

If the installation was successful, you should get a result like this:

nginx version: nginx/1.18.0 (Ubuntu)

2. Controlling the Nginx Service

Next, we should get familiar with the service control commands. Using these commands, you will be able to start, enable, stop, and disable Nginx. First, check the status of the Nginx service:

sudo systemctl status nginx

The output shows whether the service is active (running), along with recent log lines.

If the service is shown as active, Nginx is up and running. If it is not, start it with:

sudo systemctl start nginx

Then enable it so that it starts automatically at boot:

sudo systemctl enable nginx

If you want to stop the Nginx web service:

sudo systemctl stop nginx

And to keep it from starting at boot, disable it:

sudo systemctl disable nginx

To reload the Nginx configuration without dropping connections:

sudo systemctl reload nginx

And for a hard restart, use:

sudo systemctl restart nginx

3. Uncomplicated Firewall (ufw) Commands

Nginx needs access through the system's firewall. To allow this, the Nginx package installs a set of profiles for ufw (Uncomplicated Firewall), Ubuntu's default firewall tool. To display the available profiles, use:

sudo ufw app list

In the output, look for the Nginx entries (Nginx Full, Nginx HTTP, and Nginx HTTPS); the other application profiles can be ignored.

To allow HTTP traffic to Nginx through the default Ubuntu firewall, enter:

sudo ufw allow 'Nginx HTTP'

Then refresh the firewall settings:

sudo ufw reload

For HTTPS traffic, enter:

sudo ufw allow 'Nginx HTTPS'

And to allow both at once:

sudo ufw allow 'Nginx Full'

4. Running a Test

To run a test, first make sure that the Nginx service is running by checking its status as shown earlier. Then open a web browser and enter the following address:

http://127.0.0.1

You should see the default "Welcome to nginx!" page.

If the system does not have a graphical interface, the welcome page can be fetched in the terminal using curl. Install it first if necessary:

sudo apt-get install curl

Then request the page:

curl 127.0.0.1

The HTML source of the welcome page should be printed in the terminal.
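What curl does here — open a TCP connection to 127.0.0.1, send an HTTP GET, and print the response body — can be sketched in Python. This example spins up a throwaway local server on a free port instead of talking to a real Nginx instance, so it runs anywhere:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class WelcomeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Welcome to our test server!</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port, so this never clashes with Nginx.
server = HTTPServer(("127.0.0.1", 0), WelcomeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read().decode()

server.shutdown()
print(page)  # <h1>Welcome to our test server!</h1>
```

The HTML body printed at the end is exactly what curl shows in the terminal when pointed at a web server.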

5. Configuring a Server Block

In Nginx, a server block is a configuration that works as its own virtual server. By default, Nginx ships with one preconfigured server block whose document root is /var/www/html. However, Nginx can be configured with multiple server blocks for different sites.
Note that in this tutorial, we will use test_domain.com for the domain name. This may be replaced with your own domain name.
In a terminal, create a new directory for the site by entering the following command:

sudo mkdir -p /var/www/test_domain.com/html

Use chown to assign ownership and chmod to set the permissions:

sudo chown -R $USER:$USER /var/www/test_domain.com
sudo chmod -R 755 /var/www/test_domain.com
Open index.html for editing in a text editor of your choice (we will use the Nano text editor):

sudo nano /var/www/test_domain.com/html/index.html

The file is created empty, so add some simple HTML content to it; any valid test page will do.

Press CTRL+o to write the changes, then CTRL+x to exit.
Now, open the configuration file for editing:

sudo nano /etc/nginx/sites-available/test_domain.com

Enter the following code in it:

server {
    listen 80;
    root /var/www/test_domain.com/html;
    index index.html index.htm index.nginx.debian.html;
    server_name test_domain.com www.test_domain.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
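The try_files directive tells Nginx to look for the request path as a file, then as a directory, and to return 404 if neither exists. A rough Python model of that lookup order (our own illustration, not Nginx source code):

```python
import os
import tempfile

def try_files(root, uri):
    """Mimic `try_files $uri $uri/ =404` for a request URI."""
    path = os.path.join(root, uri.lstrip("/"))
    if os.path.isfile(path):      # $uri   -> serve the file itself
        return f"200 {uri}"
    if os.path.isdir(path):       # $uri/  -> fall back to the directory
        return f"200 {uri.rstrip('/')}/index"
    return "404"                  # =404   -> not found

# Build a throwaway document root resembling /var/www/test_domain.com/html
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "blog"))
open(os.path.join(root, "index.html"), "w").close()

print(try_files(root, "/index.html"))  # 200 /index.html
print(try_files(root, "/blog"))        # 200 /blog/index
print(try_files(root, "/missing"))     # 404
```

The =404 fallback is why a typo in a URL on this site produces the standard Nginx 404 page rather than an empty response.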

Press CTRL+o to write the changes, then CTRL+x to exit.
Next, create a symbolic link between the server block and the sites-enabled directory by entering the following:

sudo ln -s /etc/nginx/sites-available/test_domain.com /etc/nginx/sites-enabled

Afterward, restart Nginx so the new configuration takes effect:

sudo systemctl restart nginx

Then, open /etc/hosts for editing:

sudo nano /etc/hosts

Add this line after the last line of the file:

127.0.1.1 test_domain.com www.test_domain.com

Then save and close the file.

Now, if you open a browser window and navigate to test_domain.com (or the domain name you configured in Nginx), you should see the content of the index.html file you created with nano. If you entered different HTML content, your page will of course look different from ours.


Conclusion

In this tutorial, we have provided the guidelines for installing Nginx, controlling the service, and getting it up and running on Linux. We also tested the installation and, in the end, configured an Nginx server block. We hope you enjoyed this quick Nginx tutorial and found it helpful.


Arashtad Custom Services

In Arashtad, we have gathered a professional team of developers who are working in fields such as 3D websites, 3D games, metaverses, and other types of WebGL and 3D applications, as well as blockchain development.


Apache vs. Nginx: Which Is the Best Web Server for You?

If you are looking for open-source web servers, Apache and Nginx are the two most popular ones. Together, they are responsible for serving over 50% of traffic on the internet. Both solutions are capable of handling diverse workloads and working with other software to provide a complete web stack. While Apache and Nginx share many qualities, they should not be thought of as entirely interchangeable. Each excels in its own way, and this article will cover the strengths and weaknesses of each. In this article, before we dive into the differences between Apache and Nginx, we will take a quick look at the background of these two projects and their general characteristics.

Apache Background

The Apache HTTP Server was created by Robert McCool in 1995 and has been developed under the direction of the Apache Software Foundation since 1999. Since the HTTP web server is the foundation’s original project and is by far their most popular piece of software, it is often referred to simply as “Apache”.
The Apache web server was the most popular server on the internet from at least 1996 through 2016. Because of this popularity, Apache benefits from great documentation and integrated support from other software projects.
Apache is often chosen by administrators for its flexibility, power, and near-universal support. It is extensible through a dynamically loadable module system and can directly serve many scripting languages, such as PHP, without requiring additional software.

Nginx Background

In 2002, Igor Sysoev began work on Nginx as an answer to the C10K problem, which was an outstanding challenge for web servers to be able to handle ten thousand concurrent connections. Nginx was publicly released in 2004, and met this goal by relying on an asynchronous, event-driven architecture.
Nginx has since surpassed Apache in popularity due to its lightweight footprint and its ability to scale easily on minimal hardware. Nginx excels at serving static content quickly, has its own robust module system, and can proxy dynamic requests off to other software as needed.
Nginx is often selected by administrators for its resource efficiency and responsiveness under load, as well as its straightforward configuration syntax.

Nginx and Apache differences

There are a number of differences between Nginx and Apache that we should know in order to choose them more wisely for our needs. The differences are in the following aspects:

1. Differences in configuration
2. Differences in interpretation
3. Differences in connection handling architecture
4. Differences in static and dynamic content

Differences in Configuration

Apache and Nginx differ significantly in their approach to allowing overrides on a per-directory basis.

Apache

Apache includes an option to allow additional configuration on a per-directory basis by inspecting and interpreting directives in hidden files within the content directories themselves. These files are known as .htaccess files.
Since these files reside within the content directories themselves, when handling a request, Apache checks each component of the path to the requested file for an .htaccess file and applies the directives found within. This effectively allows decentralized configuration of the web server, which is often used for implementing URL rewrites, access restrictions, authorization and authentication, and even caching policies.
While the above examples can all be configured in the main Apache configuration file, .htaccess files have some important advantages. First, since these are interpreted each time they are found along a request path, they are implemented immediately without reloading the server. Second, it makes it possible to allow non-privileged users to control certain aspects of their own web content without giving them control over the entire configuration file.
This provides an easy way for certain web software, like content management systems, to configure their environment without providing access to the central configuration file. This is also used by shared hosting providers to retain control of the main configuration while giving clients control over their specific directories.

Nginx

Nginx does not interpret .htaccess files, nor does it provide any mechanism for evaluating per-directory configuration outside of the main configuration file. Apache was originally developed at a time when it was advantageous to run many heterogeneous web deployments side-by-side on a single server, and delegating permissions made sense. Nginx was developed at a time when individual deployments were more likely to be containerized and to ship with their own network configurations, minimizing this need. This may be less flexible in some circumstances than the Apache model, but it does have its own advantages.
The most notable improvement over the .htaccess system of directory-level configuration is increased performance. For a typical Apache setup that may allow .htaccess in any directory, the server will check for these files in each of the parent directories leading up to the requested file, for each request. If one or more .htaccess files are found during this search, they must be read and interpreted. By not allowing directory overrides, Nginx can serve requests faster by doing a single directory lookup and file read for each request (assuming that the file is found in the conventional directory structure).
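That per-request search cost can be made concrete with a small model of our own: an Apache-style lookup must check the document root plus every directory along the request path for an .htaccess file, while a server with no directory overrides does a constant single lookup.

```python
import os

def override_checks(rel_path):
    """Directories an Apache-style .htaccess search must inspect:
    the document root plus every directory along the request path."""
    dirs = [d for d in os.path.dirname(rel_path).split("/") if d]
    return 1 + len(dirs)  # docroot + each subdirectory

# Apache-style lookup cost grows with path depth...
print(override_checks("index.html"))        # 1
print(override_checks("a/b/c/page.html"))   # 4

# ...while an Nginx-style server, with no directory overrides,
# performs a constant single lookup to locate the requested file.
NGINX_CHECKS = 1
print(NGINX_CHECKS)                         # 1
```

For a deeply nested site serving thousands of requests per second, those extra stat-and-parse operations per request add up, which is the performance argument made above.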
Another advantage is security related. Distributing directory-level configuration access also distributes the responsibility of security to individual users, who may not be trusted to handle this task well. Keep in mind that it is possible to turn off .htaccess interpretation in Apache if these concerns resonate with you.

Differences in Interpretation

How the web server interprets requests and maps them to actual resources on the system is another area where these two servers differ.

Apache

Apache provides the ability to interpret a request as a physical resource on the filesystem or as a URI location that may need a more abstract evaluation. In general, Apache uses <Directory> or <Files> blocks for the former, while it utilizes <Location> blocks for more abstract resources.
Because Apache was designed from the ground up as a web server, the default is usually to interpret requests as filesystem resources. It begins by taking the document root and appending the portion of the request following the host and port number to try to find an actual file. Essentially, the filesystem hierarchy is represented on the web as the available document tree.
Apache provides a number of alternatives for when the request does not match the underlying filesystem. For instance, an Alias directive can be used to map to an alternative location. Using <Location> blocks is a method of working with the URI itself instead of the filesystem. There are also regular expression variants that can be used to apply configuration more flexibly throughout the filesystem.
While Apache has the ability to operate on both the underlying filesystem and other web URIs, it leans heavily towards filesystem methods. This can be seen in some of the design decisions, including the use of .htaccess files for per-directory configuration. The Apache docs themselves warn against using URI-based <Location> blocks to restrict access when the request mirrors the underlying filesystem.

Nginx

Nginx was created to be both a web server and a proxy server. Due to the architecture required for these two roles, it works primarily with URIs, translating to the filesystem when necessary.
This is evident in the way that Nginx configuration files are constructed and interpreted. Nginx does not provide a mechanism for specifying the configuration for a filesystem directory and instead parses the URI itself.
For instance, the primary configuration blocks for Nginx are server and location blocks. The server block interprets the host being requested, while the location blocks are responsible for matching portions of the URI that comes after the host and port. At this point, the request is being interpreted as a URI, not as a location on the filesystem.
For static files, all requests eventually have to be mapped to a location on the filesystem. First, Nginx selects the server and location blocks that will handle the request and then combines the document root with the URI, adapting anything necessary according to the configuration specified.
This may seem similar, but parsing requests primarily as URIs instead of filesystem locations allows Nginx to more easily function in both web, mail, and proxy server roles. Nginx is configured by laying out how to respond to different request patterns. Nginx does not check the filesystem until it is ready to serve the request, which explains why it does not implement a form of .htaccess files.
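A simplified model of how an Nginx-style server picks a location block for a request: match URI prefixes and let the longest match win. (Real Nginx also supports exact and regex location matches, which this sketch of ours omits; the prefixes and handler names below are hypothetical.)

```python
def match_location(locations, uri):
    """Pick the handler whose location prefix is the longest match."""
    best = ""
    for prefix in locations:
        if uri.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return locations[best]

# Hypothetical config: each location prefix maps to a handler description.
locations = {
    "/":        "serve from /var/www/html",
    "/images/": "serve from /var/www/static",
    "/api/":    "proxy to backend application server",
}

print(match_location(locations, "/images/logo.png"))  # serve from /var/www/static
print(match_location(locations, "/api/users"))        # proxy to backend application server
print(match_location(locations, "/about.html"))       # serve from /var/www/html
```

Note that nothing in this dispatch touches the filesystem: the decision is made purely on the URI, and only the chosen handler decides whether a disk lookup is needed at all.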

Differences in Connection Handling Architecture

One difference between Apache and Nginx is the specific way that they handle connections and network traffic. This is perhaps the most significant difference in the way that they respond under load.

Apache

Apache provides a variety of multi-processing modules (Apache calls these MPMs) that dictate how client requests are handled. This allows administrators to configure its connection handling architecture. These are:
1. mpm_prefork: This processing module spawns processes with a single thread each to handle requests. Each child can handle a single connection at a time. As long as the number of requests is fewer than the number of processes, this MPM is very fast. However, performance degrades quickly after the requests surpass the number of processes, so this is not a good choice in many scenarios. Each process has a significant impact on RAM consumption, so this MPM is difficult to scale effectively. This may still be a good choice though if used in conjunction with other components that are not built with threads in mind. For instance, PHP is not always thread-safe, so this MPM has been recommended as a safe way of working with mod_php, the Apache module for processing PHP files.
2. mpm_worker: This module spawns processes that can each manage multiple threads. Each of these threads can handle a single connection. Threads are much more efficient than processes, which means that this MPM scales better than the prefork MPM. Since there are more threads than processes, this also means that new connections can immediately take a free thread instead of having to wait for a free process.
3. mpm_event: This module is similar to the worker module in most situations but is optimized to handle keep-alive connections. When using the worker MPM, a connection will hold a thread regardless of whether a request is actively being made for as long as the connection is kept alive. The event MPM handles keep-alive connections by setting aside dedicated threads for handling keep-alive connections and passing active requests off to other threads. This keeps the module from getting bogged down by keep-alive requests, allowing for faster execution.
Apache provides a flexible architecture for choosing different connection and request handling algorithms. The choices provided are mainly a function of the server’s evolution and the increasing need for concurrency as the internet landscape has changed.
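The worker-style model can be approximated in Python: a fixed pool of execution units shared by many connections, with connections beyond the pool size queuing until a unit frees up. (This is a loose analogy of ours using a thread pool, not Apache's actual process and thread management.)

```python
from concurrent.futures import ThreadPoolExecutor

def handle(conn_id):
    # Each connection occupies one worker thread for its whole duration,
    # as in mpm_worker; under mpm_prefork it would occupy a whole process.
    return f"served connection {conn_id}"

# A "worker" with a fixed pool: 4 threads shared by 20 connections.
# Connections beyond the pool size queue until a thread is free.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, range(20)))

print(len(results))  # 20
print(results[0])    # served connection 0
```

The key property this illustrates is that capacity is bounded by the pool size: if all units are busy (for instance, held by idle keep-alive connections), new connections must wait, which is the problem the event MPM addresses.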

Nginx

Nginx came onto the scene after Apache, with more awareness of the concurrency problems that sites face at scale. As a result, Nginx was designed from the ground up to use an asynchronous, non-blocking, event-driven connection handling algorithm.
Nginx spawns worker processes, each of which can handle thousands of connections. The worker processes accomplish this by implementing a fast looping mechanism that continuously checks for and processes events. Decoupling actual work from connections allows each worker to concern itself with a connection only when a new event has been triggered.
Each of the connections handled by the worker are placed within the event loop. Within the loop, events are processed asynchronously, allowing work to be handled in a non-blocking manner. When a connection closes, it is removed from the loop.
This style of connection processing allows Nginx to scale with limited resources. Since the server is single-threaded and processes are not spawned to handle each new connection, the memory and CPU usage tends to stay relatively consistent, even at times of heavy load.
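The worker's "fast looping mechanism" corresponds to an OS-level readiness API such as epoll or kqueue; Python's selectors module exposes the same idea. This toy loop of ours serves several connections from a single thread, touching each one only when an event (readable data) fires:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]  # 3 simulated connections
responses = []

for i, (client, server_side) in enumerate(pairs):
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, data=i)
    client.sendall(b"GET /\r\n")                 # each client sends a request

# Single-threaded event loop: handle whichever connection is ready.
while len(responses) < len(pairs):
    for key, _mask in sel.select(timeout=1):
        conn_id = key.data
        request = key.fileobj.recv(1024)
        responses.append((conn_id, request))
        sel.unregister(key.fileobj)
        key.fileobj.close()

for client, _srv in pairs:
    client.close()

print(len(responses))  # 3 connections handled by one thread
```

No thread or process is dedicated to any connection; the loop only spends time on a connection when the selector reports an event for it, which is why memory and CPU usage stay flat as connection counts grow.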

Differences in Static and Dynamic Content

In terms of real-world use-cases, one of the most common comparisons between Apache and Nginx is the way in which each server handles requests for static and dynamic content.

Apache

Apache handles static content using conventional file-based methods. The performance of these operations is mainly a function of the MPM methods described above.
Apache can also process dynamic content by embedding a processor of the language in question into each of its worker instances. This allows it to execute dynamic content within the web server itself without having to rely on external components. These dynamic processors can be enabled through the use of dynamically loadable modules.
Apache’s ability to handle dynamic content internally was a direct contributor to the popularity of LAMP (Linux-Apache-MySQL-PHP) architectures, as PHP code can be executed natively by the web server itself.

Nginx

Nginx does not have any ability to process dynamic content natively. To handle PHP and other requests for dynamic content, Nginx has to hand off a request to an external library for execution and wait for output to be returned. The results can then be relayed to the client.
These requests must be exchanged between Nginx and the external library using one of the protocols that Nginx knows how to speak (HTTP, FastCGI, SCGI, uWSGI, memcache). In practice, PHP-FPM, a FastCGI implementation, is usually a drop-in solution, and Nginx is not closely coupled with any particular language.
However, this method has some advantages as well. Since the dynamic interpreter is not embedded in the worker process, its overhead will only be present for dynamic content. Static content can be served in a straight-forward manner and the interpreter will only be contacted when needed.
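As an illustration, a typical hand-off might look like the following Nginx configuration sketch: static files are served straight from disk, while PHP requests are passed to a PHP-FPM socket over FastCGI. The paths and socket location here are illustrative and vary by distribution, so adjust them to your setup.

```nginx
server {
    listen 80;
    root /var/www/example/html;
    index index.php index.html;

    # Static content: served directly by Nginx itself.
    location / {
        try_files $uri $uri/ =404;
    }

    # Dynamic content: handed off to an external PHP-FPM process.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

Only requests ending in .php ever touch the interpreter; everything else follows the cheap static path, which is the advantage described above.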

Using Both Servers Together

After reviewing the benefits and limitations of both Apache and Nginx, you may have a better idea of which server is more suited to your needs. In some cases, it is possible to leverage each server’s strengths by using them together.
The conventional configuration for this partnership is to place Nginx in front of Apache as a reverse proxy. This will allow Nginx to handle all client requests. This takes advantage of Nginx’s fast processing speed and ability to handle large numbers of connections concurrently.
For static content, which Nginx excels at, files or other directives will be served quickly and directly to the client. For dynamic content, for instance PHP files, Nginx will proxy the request to Apache, which can then process the results and return the rendered page. Nginx can then pass the content back to the client.
This setup works well for many people because it allows Nginx to function as a sorting machine. It will handle all requests it can and pass on the ones that it has no native ability to serve. By cutting down on the requests the Apache server is asked to handle, we can alleviate some of the blocking that occurs when an Apache process or thread is occupied.
This configuration also facilitates horizontal scaling by adding additional backend servers as necessary. Nginx can be configured to pass requests to multiple servers, increasing this configuration’s performance.
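The arrangement described above might be sketched in Nginx configuration as follows, assuming Apache listens on an internal port such as 8080 on the same machine (the ports, paths, and server names are illustrative):

```nginx
upstream apache_backend {
    server 127.0.0.1:8080;   # Apache listening on an internal port
    # server 10.0.0.2:8080;  # additional backends enable horizontal scaling
}

server {
    listen 80;
    server_name example.com;
    root /var/www/example/html;

    # Static assets: served quickly and directly by Nginx.
    location /static/ {
        expires 7d;
    }

    # Everything else (e.g. PHP pages): proxied to Apache.
    location / {
        proxy_pass http://apache_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Uncommenting additional upstream servers is all it takes to spread the dynamic workload across multiple Apache backends.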

Conclusion

In this article, you got familiar with both Nginx and Apache in terms of their background, architecture, configuration (centralized or distributed), interpretation, and static and dynamic contents. You also learned about the details of using the two kinds of web servers together.
Both Apache and Nginx are powerful, flexible, and capable. Deciding which server is best for you is largely a function of evaluating your specific requirements and testing with the patterns that you expect to see.
There are differences between these projects that have a very real impact on the raw performance, capabilities, and implementation time necessary to use either solution in production. Use the solution that best aligns with your objectives.


What is a Reverse Proxy? Introduction to Proxy servers

A reverse proxy is a server that is placed on the server side and directs requests from clients to the main servers. In large deployments, there are usually multiple servers, which need some kind of management layer to direct each incoming request to the server that contains the proper data for the client's request. Reverse proxies are typically implemented to help increase security, performance, and reliability. In this article, we will first see what a proxy server is and then take a look at the different kinds of proxy servers. Afterward, we will focus on the reverse proxy, its pros and cons, and see when and where we need to use proxy servers. If you are new to IT and want to learn more about web servers, feel free to take a look at our blog and read the other documents that we have provided on the subject.

What is a Proxy server?

Oftentimes, when we talk about a proxy server, we are referring to a forward proxy server. A forward proxy, also known as a web proxy, is placed in front of a group of client machines. When those computers make requests to sites and services on the Internet, the proxy server intercepts those requests and then communicates with web servers on behalf of those clients, like a middleman. Below, you can see the forward proxy in comparison with the reverse proxy.

In standard Internet communication, the client computer reaches out directly to the origin server: the client sends requests to the origin server and the origin server responds to the client. When a forward proxy is in place, the client instead sends requests to the forward proxy, which then forwards them to the origin server. The origin server sends its response to the forward proxy, which forwards the response back to the client's computer. Why would anyone add this extra middleman to their Internet activity? There are a few reasons one might want to use a forward proxy:

1. To avoid governmental or institutional browsing restrictions: Some governments, schools, and other organizations use firewalls to give their users access to a limited version of the Internet. A forward proxy can be used to get around these restrictions, as it lets users connect to the proxy rather than directly to the sites they are visiting.

2. To filter certain content: Conversely, there are circumstances in which proxies are set up to block a group of users from accessing certain content or sites. For example, a school network might be configured to connect to the web through a proxy which enforces content filtering rules, refusing to forward responses from Facebook and other social media sites.

3. To be anonymous online: There are times when users want to protect their identity. In some cases, regular Internet users simply desire increased anonymity online, but in other cases, Internet users live in places where the government can impose serious consequences on political dissidents. Criticizing the government in a web forum or on social media can lead to fines or imprisonment for these users. If one of these dissidents uses a forward proxy to connect to a website where they post politically sensitive comments, the IP address used to post the comments will be harder to trace back to the dissident; only the IP address of the proxy server will be visible.

How does a reverse proxy work?

As mentioned earlier at the beginning of the article, the reverse proxy server is placed on the server side and intercepts the requests from the clients. This is different from a forward proxy, where the proxy sits in front of the clients. With a reverse proxy, when clients send requests to the origin server of a website, those requests are intercepted at the network edge by the reverse proxy server. The reverse proxy server will then send requests to and receive responses from the origin server. If we want to compare the forward proxy server with the reverse proxy server, we should put it this way; a forward proxy sits in front of a client and ensures that no origin server ever communicates directly with that specific client. On the other hand, a reverse proxy sits in front of an origin server and ensures that no client ever communicates directly with that origin server.

Why do we use Reverse Proxy?

There are multiple reasons to use a reverse proxy server:

1. Load balancing and Global Server Load Balancing (GSLB): A website with millions of users every day from all around the globe may not be able to handle all of its incoming traffic with a single origin server. Instead, the site can be distributed among a pool of different servers, all handling requests for the same site. In this case, a reverse proxy can provide a load balancing solution that distributes the incoming traffic evenly among the different servers to prevent any single server from becoming overloaded. In the event that a server fails completely, the other servers can step up to handle the traffic.

2. Security: With a reverse proxy in front of the origin servers, a website or web service never has to reveal the IP address of its origin servers, which protects them from attackers who want to mount a targeted attack, such as a DDoS attack. Instead, attackers will only be able to target the reverse proxy, such as a CDN like Cloudflare's, which will have tighter security and more resources to fend off a cyber attack.

3. SSL encryption: Encrypting and decrypting SSL (Secure Sockets Layer), and its successor TLS (Transport Layer Security), communications for each client can be computationally expensive for an origin server. A reverse proxy can be configured to decrypt all incoming requests and encrypt all outgoing responses, freeing up valuable resources on the origin server.

4. Caching: Caching is the process of storing copies of files in a cache, or temporary storage location, so that they can be accessed more quickly. A reverse proxy can also cache content, resulting in faster performance. For instance, if a user in London visits a reverse-proxied website with web servers in Silicon Valley, the user might actually connect to a local reverse proxy server in London, which then communicates with an origin server in Silicon Valley. The proxy server can then cache (temporarily save) the response data, and subsequent London users who browse the site will get the locally cached version from the London reverse proxy server, resulting in much faster performance.
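The load-balancing behavior in reason 1 can be modeled in a few lines of our own: a round-robin rotation over a pool of backends that skips any server marked as down. (Production balancers, such as Nginx's upstream module, add health checks, weights, and much more.)

```python
from itertools import cycle

class RoundRobinProxy:
    """Rotate requests across healthy backend servers."""

    def __init__(self, backends):
        self.backends = backends
        self.down = set()
        self._rotation = cycle(backends)

    def pick_backend(self):
        for _ in range(len(self.backends)):
            backend = next(self._rotation)
            if backend not in self.down:
                return backend
        raise RuntimeError("no healthy backends")

proxy = RoundRobinProxy(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

first = [proxy.pick_backend() for _ in range(4)]
print(first)           # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']

proxy.down.add("10.0.0.2")  # one server fails completely...
after_failure = [proxy.pick_backend() for _ in range(4)]
print(after_failure)   # ['10.0.0.3', '10.0.0.1', '10.0.0.3', '10.0.0.1']
```

After the failure, traffic keeps flowing to the remaining servers with no client-visible disruption, which is exactly the failover property described above.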

Conclusion

In this article, we got familiar with the two types of proxy servers: forward and reverse. The term proxy server most often refers to the forward type. However, reverse proxy servers are also very common these days, considering how many websites serve large numbers of users and heavy traffic. Not to mention that many important web services and websites need a security layer for encryption and decryption. Moreover, caching through reverse proxy servers makes the use of origin servers much more efficient.


What is a Web Server?

In this article, we will talk about one of the most important and fundamental concepts of IT: the web server. We will explain what web servers are, what they do, and how they do it. A web server is computer software, together with its underlying hardware, that accepts requests via HTTP or its secure version, HTTPS. HTTP is a network protocol created to distribute web content. A user accesses a server via HTTP or HTTPS by sending a request to it; the server then checks the availability of the requested content and returns an appropriate response. If the content is not available, the server responds with an error message. Responses are identified by three-digit status codes, such as the well-known 404 error. The request that the user sends to the server is typically a URL, the address of a website. The URL is entered in a web browser, and the web browser sends the request to the server. There are two types of web servers, static and dynamic, and we will discuss them in more detail throughout this article.

What is a web server?

The term web server can refer to hardware, software, or computer software together with its underlying hardware. On the hardware side, a web server is a computer that stores web server software and a website’s component files (for example, HTML documents, images, CSS stylesheets, and JavaScript files). A web server connects to the Internet and supports physical data interchange with other devices connected to the web.
On the software side, a web server includes several parts that control how web users access hosted files. At a minimum, this is an HTTP server. An HTTP server is software that understands URLs (web addresses) and HTTP (the protocol your browser uses to view webpages). An HTTP server can be accessed through the domain names of the websites it stores, and it delivers the content of these hosted websites to the end user’s device.
As explained earlier, whenever a browser requests a file via HTTP and the request reaches the correct hardware (the web server), the software (the HTTP server) accepts the request, finds the requested file, and sends it to the browser. If the server doesn’t find the requested document, it returns a 404 error.
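The request/response cycle described above can be sketched as a plain function. This is only an illustrative sketch, not a real server; the `handleRequest` helper and the `files` map are hypothetical names invented for the example:

```javascript
// Sketch of a web server's core logic: look up the requested path
// among the hosted files and return 200 with the content,
// or a 404 error when the document is missing.
function handleRequest(files, path) {
  if (Object.prototype.hasOwnProperty.call(files, path)) {
    return { status: 200, body: files[path] };
  }
  return { status: 404, body: "Not Found" };
}

// Hypothetical hosted files:
const files = { "/index.html": "<h1>Hello</h1>" };

console.log(handleRequest(files, "/index.html").status); // 200
console.log(handleRequest(files, "/missing.html").status); // 404
```

A real HTTP server does the same lookup, but against files stored on disk and over a network socket.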

Static server vs. dynamic server:

To create a website, you need either a static web server or a dynamic one. A static web server, or stack, consists of a computer (hardware) with an HTTP server (software). We call it “static” because the server sends its hosted files as-is to your browser. On the other hand, a dynamic web server consists of a static web server plus extra software, most commonly an application server and a database. We call it “dynamic” because the application server updates the hosted files before sending content to your browser via the HTTP server.

Why do we need to host the files?

Every web page consists of a number of files: HTML documents, CSS style sheets, JavaScript code, and assets such as photos or other documents. All these files need to be stored somewhere on a web server. The first question that comes to mind is: why not store all this data on our own computer? That is possible, but it is better to save the files on a server because:

1. A dedicated web server is typically more available (up and running).
2. Excluding downtime and system trouble, a dedicated web server is always connected to the Internet.
3. A dedicated web server can keep the same IP address all the time. This is known as a dedicated IP address (not all ISPs provide a fixed IP address for home lines).
4. A dedicated web server is typically maintained by a third party. Thus, finding a good service provider is an important part of creating a website.

What is HTTP and what does it do?

HTTP is the acronym for Hypertext Transfer Protocol, and it specifies how to transfer hypertext (linked web documents) between two computers. HTTP is a textual protocol, meaning that all commands are plain text and human-readable. It is also a stateless protocol, which means that neither the server nor the client remembers previous communications. For example, relying on HTTP alone, a server can’t remember a password you typed or your progress on an incomplete transaction. You need an application server for tasks like that.
HTTP provides clear rules for how a client and server communicate. Each interaction between the client and server is called a message. HTTP messages are requests or responses. Client devices submit HTTP requests to servers, which reply by sending HTTP responses back to the clients.
HTTP requests are sent when a client device, such as an internet browser, asks the server for the information needed to load the website. The request provides the server with the desired information it needs to tailor its response to the client device. Each HTTP request contains encoded data, with information such as:
1. The specific version of HTTP followed, such as HTTP/1.1 or HTTP/2.
2. A URL. This points to the resource on the web.
3. An HTTP method. This indicates the specific action the request expects the server to take. Two of the most common methods are GET and POST.
4. HTTP request headers. This includes data such as what type of browser is being used and what data the request is seeking from the server. It can also include cookies, which show information previously sent from the server handling the request.
5. An HTTP body. This is optional information the server needs from the request, such as user forms — username/password logins, short responses and file uploads — that are being submitted to the website.
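Because HTTP is a plain-text protocol, the parts listed above can be assembled into a request by hand. A minimal sketch (the `buildRequest` helper is invented for illustration, not a standard API):

```javascript
// Assemble the textual form of an HTTP request from its parts:
// method, URL path, protocol version, headers, and an optional body.
function buildRequest(method, path, headers, body) {
  const lines = [`${method} ${path} HTTP/1.1`]; // the request line
  for (const [name, value] of Object.entries(headers)) {
    lines.push(`${name}: ${value}`); // one header per line
  }
  // A blank line separates the headers from the body.
  return lines.join("\r\n") + "\r\n\r\n" + (body || "");
}

const req = buildRequest("GET", "/index.html", { Host: "example.com" }, "");
console.log(req.split("\r\n")[0]); // "GET /index.html HTTP/1.1"
```

In practice a browser or an HTTP library builds this text for you, but the format on the wire is exactly this readable.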

HTTP Status Codes:

In response to HTTP requests, servers often issue response codes, indicating the request is being processed, there was an error in the request or that the request is being redirected. Common response codes include:
1. 200 OK. This means that the request, such as GET or POST, worked and is being acted upon.
2. 301 Moved Permanently. This response code means that the URL of the requested resource has been changed permanently.
3. 401 Unauthorized. The client, or user making the request of the server, has not been authenticated.
4. 403 Forbidden. The client’s identity is known but has not been given access authorization.
5. 404 Not Found. This is the most frequent error code. It means that the URL is not recognized or the resource at the location does not exist.
6. 500 Internal Server Error. The server has encountered a situation it doesn’t know how to handle.
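The codes above follow a pattern: the first digit identifies the general class of the response. A sketch of that grouping, assuming the standard first-digit convention:

```javascript
// Map a three-digit HTTP status code to its general class:
// 2xx success, 3xx redirection, 4xx client error, 5xx server error.
function statusClass(code) {
  if (code >= 200 && code < 300) return "success";
  if (code >= 300 && code < 400) return "redirection";
  if (code >= 400 && code < 500) return "client error";
  if (code >= 500 && code < 600) return "server error";
  return "other";
}

console.log(statusClass(200)); // "success"
console.log(statusClass(404)); // "client error"
console.log(statusClass(500)); // "server error"
```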

What Are Proxy Servers?

Proxies, or proxy servers, are application-layer servers, computers, or other machines that sit between the client device and the server. Proxies relay HTTP requests and responses between the client and the server. Typically, there are one or more proxies in each client-server interaction.
Proxies may be transparent or non-transparent. Transparent proxies do not modify the client’s request but rather send it to the server in its original form. Non-transparent proxies modify the client’s request in some capacity, and they can be used for additional services, often to increase the server’s retrieval speed.
Web developers can use proxies for the following purposes:
1. Caching. Cache servers can save web pages or other internet content locally, for faster content retrieval and to reduce the demand for the site’s bandwidth.
2. Authentication. Controlling access privileges to applications and online information.
3. Logging. The storage of historical data, such as the IP addresses of clients that sent requests to the server.
4. Web filtering. Controlling access to web pages that can compromise security or include inappropriate content.
5. Load balancing. Client requests to the server can be handled by multiple servers, rather than just one.
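The caching purpose in point 1 can be sketched as a wrapper around an origin server. The `makeCachingProxy` helper below is hypothetical; it only illustrates the idea that repeated requests are answered locally and never reach the origin twice:

```javascript
// Sketch of a caching proxy: the first request for a path is
// forwarded to the origin; later requests for the same path are
// served from the local cache, reducing load on the origin server.
function makeCachingProxy(origin) {
  const cache = new Map();
  return function fetchViaProxy(path) {
    if (cache.has(path)) {
      return { from: "cache", body: cache.get(path) };
    }
    const body = origin(path); // forward to the origin server
    cache.set(path, body);     // keep a local copy for next time
    return { from: "origin", body };
  };
}

// Hypothetical origin that counts how often it is actually hit:
let hits = 0;
const origin = (path) => { hits++; return `content of ${path}`; };
const proxy = makeCachingProxy(origin);

console.log(proxy("/index.html").from); // "origin"
console.log(proxy("/index.html").from); // "cache"
console.log(hits); // 1
```

Real cache servers add expiry and validation rules on top of this idea, but the bandwidth saving comes from exactly this short-circuit.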

Conclusion

In this article, we have become familiar with the concept of web servers: what they are, what they do, and how they do it. Then we got to know dynamic and static web servers. In addition, we introduced the HTTP protocol, along with the details of its functionality, status codes, and proxy servers. This article gives you a basic perspective of IT; if you want to build a career in this field, you need to learn these basics and become familiar with the concepts above.


Top 15 commands every developer must know

If you are a newbie developer, or you are not sure you know all the useful commands that help you navigate through directories and work faster, this tutorial is for you! In this tutorial, we will see how to navigate through directories, create files and folders, open files, remove them, or apply any other kind of manipulation just by typing a phrase or two as a command in the terminal. “Terminal” is not the only term developers use for the window where you enter commands: console, command line interface, command window, command prompt, shell, and so on are all names that can refer to a terminal, depending on the operating system the user is working on. Follow along with this tutorial and we will show you how these commands make your life easier and faster as a developer. Not to mention that if you use them often in the workplace, everyone will notice you as a pro! Note that the default operating system for this tutorial is Linux, but most of the commands can be used on other operating systems as well.

What is FHS?

In nearly all Linux-based operating systems, there is a universal standard for the directory structure known as the Filesystem Hierarchy Standard (FHS). The FHS defines a set of directories, each of which serves its own special function. The forward slash (/) indicates the root directory in the filesystem hierarchy defined by the FHS. When a user logs in to the shell, they are brought to their own user directory, stored within /home/; this is referred to as the user’s home directory. The FHS defines /home/ as containing the home directories of regular users. The root user has its own home directory specified by the FHS: /root/. Note that / is referred to as the “root directory”, and that it is different from /root/, which is stored within /. Since the FHS is the default filesystem layout on Linux machines, and each directory within it serves a specific purpose, it simplifies the process of organizing files by their function.

What Are the navigation commands?

In most operating systems, including Linux, filesystems are based on a directory tree. This means that you can create directories (which are functionally identical to folders in other operating systems) inside other directories, and files can exist in any directory. The first command we will work with is pwd. To see what directory you are currently in, you can run the pwd command, which stands for “print working directory”: pwd Result: /home/mohamad As you can see, the command shows the directory you are in. On Linux, all directories are rooted in /, the root directory; every other directory is a part of it. This example output indicates that the current active directory is mohamad/, which is inside the home/ directory, which lives in the root directory, /. As mentioned previously, since mohamad/ is stored within home/, it represents the mohamad user’s home directory. Now, if you want to see the list of directories (folders) and files inside the current active directory, you can use the ls command: ls This returns a list of the names of any files or directories held in your current working directory. If you’re following this guide and have just installed your operating system, though, this command may not return any output, because the directory is empty. You can create one or more directories using the mkdir command. This command stands for “make directory”, and with it you can create one or more folders (directories) inside your current active directory: mkdir Project1 Project2 This creates two folders with the names Project1 and Project2. If you enter one name, it creates only one directory.
Now, let’s check that the new directories exist by listing the contents of the current directory: ls Result: Project1 Project2 Now, if you want to enter one of these directories, you can use the cd command, which stands for “change directory”: cd Project1 You can also enter any directory regardless of the current active directory: cd /home/mohamad/Project1 And if you want to go up one level, for instance to return to mohamad/, you can enter the following command: cd .. And you will be back where you were before entering the cd command.

What Are the File Manipulation commands?

Up to here, we have used navigation commands to create folders and change directories. Now we want to see how to manipulate files. These files could be of any type, such as a Python or JavaScript source file, or just a simple .txt file for writing text. The first command we will use is touch. We use it to create a new file: touch Script.py This creates a Python file where you can write your own Python scripts. If you want to rename it, you can use the mv command: mv Script.py NewScript.py And the name changes to NewScript.py. Now, if you would like to copy a file, you can use the cp command: cp Script.py CopyOfScript.py Also, consider that you can open the file using the nano command: nano Script.py There are other editors, like VS Code. If you are on a VS Code terminal, you can use: code . The code . command opens all the files of the current active directory and shows the list of them in the left-hand sidebar. Now, if you want to show the contents of a file inside the terminal, you can use the cat command: cat Script.py The less command does the same in a different manner: less Script.py If you want to remove a file, you can use the rm command: rm Script.py And if you want to remove an empty directory, you can use: rm -d Project1 You can also use the following command instead: rmdir Project1 And if the directory is not empty, you can use: rm -r Project1 Finally, if you are looking for the manual of a certain command, you can use the man command: man NameOfTheCommand This gives you the manual of the command you are looking for and helps you use it the way you like.

Conclusion

In this tutorial, we have become familiar with the different commands we can use on Linux (or other operating systems) to interact with directories and files. These commands help you quickly create, delete, rename, copy, and open files, as well as create, remove, and open folders. They will make your life easier and faster as a developer. Also, if you get used to applying them instead of doing those actions manually, you will look more like a pro and a senior developer.


How to Load an OBJ 3D Model with its Texture in Three JS?

Most of the 3D models we want to use in games, such as characters, furniture, closed areas, rooms, cars, etc., are designed in the OBJ format, and naturally they have a texture too. The material definition is usually exported in the MTL file format. Most of the ready-made free models you will find on the different websites that offer 3D models, like Thingiverse, come as a zip file containing OBJ and MTL files. In this tutorial, we will focus on importing OBJ and MTL files in Three JS. This tutorial is an important part of designing your game, animation, or any other kind of web application, because there are already many free 3D designs out there on the internet; you only need to import and use them in your scene. So now, let’s get started with the project. Don’t forget to find a 3D model of your choice from wherever you like. Note that the files need to have the .obj and .mtl formats. Some of these models come with a .png or .jpg photo as well.

A simple example from scratch:

We will get started with the main elements of a Three.js scene, including the camera, the renderer, the scene, and the object. Before doing that, we use the Vite plugin to easily create all the folders and files you need to run the Three.js code. First off, create a folder in the directory of your projects by using the following commands: mkdir OBJLoader
cd OBJLoader
Then, inside your project folder, create the necessary files and folders by simply running the Vite command: npm create vite@latest Then enter the name of the project and the package (the names are arbitrary, and you can choose anything you want). Then select vanilla as both the framework and the variant. After that, enter the following commands in the terminal. Notice that here OBJLoader is the project folder’s name, and thus we change the directory to OBJLoader; the name depends on what you entered in the Vite setup: cd OBJLoader
npm install
Afterward, you can enter the JavaScript code you want to write in the main.js file. So, we will enter the base or template code for running every project with an animating object, such as a sphere. Also, do not forget to install the Three.js package library every time you create a project: npm install three

Importing the OBJ 3D model:

Now, enter the following script in the main.js file:

import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import Stats from 'three/examples/jsm/libs/stats.module';

const scene = new THREE.Scene();
const pointLight = new THREE.PointLight(0xffffff,5);
pointLight.position.set(0,8000,0);
pointLight.intensity = 5;
scene.add(pointLight);

const pointLight2 = new THREE.PointLight(0xffffff,5);
pointLight2.position.set(2000,3000,4000);
pointLight2.intensity = 5;
scene.add(pointLight2);

const width = 20;
const height = 20;
const intensity = 5;
const rectLight = new THREE.RectAreaLight( 0xffffff, intensity,width, height );
rectLight.position.set( 5000, 5000, 5000 );
scene.add( rectLight );

const camera = new THREE.PerspectiveCamera(75, innerWidth/innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({
     antialias : true
});

renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
const controls = new OrbitControls(camera,renderer.domElement);

//Loading the MTL and OBJ files
var mtlLoader = new MTLLoader();
mtlLoader.load('/Models/plane.mtl',function(materials){
     materials.preload();
     var objLoader = new OBJLoader();
     objLoader.setMaterials(materials);
     objLoader.load('/Models/plane.obj',function(object){
          scene.add(object);
     });
});

camera.position.z = 15;
window.addEventListener('resize', onWindowResize, false);

function onWindowResize() {
     camera.aspect = window.innerWidth / window.innerHeight;
     camera.updateProjectionMatrix();
     renderer.setSize(window.innerWidth, window.innerHeight);
     render();
};

const stats = Stats()
document.body.appendChild(stats.dom)
function animate() {
     requestAnimationFrame(animate);
     render();
     //stats.update();
};

function render() {
     renderer.render(scene, camera);
};

animate();


Now we save the code and enter the following command in the terminal: npm run dev The above script will give the following result:

As you can see, we are observing the smallest details of an airplane with all the textures of the different parts.

Explaining the code:

As always, we added the necessary elements of every scene: the scene itself, the camera, the renderer, the orbit controls, and the animation function. Notice that we also imported all the necessary packages for our purpose. The main part of this code is loading the OBJ and MTL files: we first load the MTL and, inside its callback, load the OBJ file as well. In other words, the MTL is the material of the OBJ object; it is like creating an object and then adding its material to it. Notice that many of the objects you find on the internet are very large, or very small, in size. So you should set the lights and the camera according to the dimensions of the given object; otherwise, you will see nothing, or you will have to zoom in or out a great deal to see the object clearly. Also, the lighting might not display the object properly, so you have to set its position and intensity, too. We have placed the models in the Models folder and imported them according to their names. Make sure you set the names according to your files, and you should have the files ready for display!
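Since the right scale depends on the model, one common trick is to compute the model's bounding box and derive a uniform scale factor from it. The sketch below uses plain {x, y, z} points to stay self-contained; in a real scene you would obtain the box with new THREE.Box3().setFromObject(object) and apply the result with object.scale.setScalar(...). The fitScale helper and the sample vertices are invented for illustration:

```javascript
// Compute a uniform scale factor that shrinks (or grows) a model's
// bounding box so its largest axis matches a target size.
function fitScale(vertices, targetSize) {
  const axes = ["x", "y", "z"];
  let largest = 0;
  for (const a of axes) {
    const values = vertices.map((v) => v[a]);
    const extent = Math.max(...values) - Math.min(...values); // axis span
    largest = Math.max(largest, extent);
  }
  return targetSize / largest;
}

// A hypothetical 4000-unit-wide model scaled to fit a 10-unit view:
const verts = [
  { x: 0, y: 0, z: 0 },
  { x: 4000, y: 1000, z: 500 },
];
console.log(fitScale(verts, 10)); // 0.0025
```

With the real API this becomes roughly: compute the box, call box.getSize(...), then object.scale.setScalar(target / maxAxis).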

Conclusion

In this tutorial, we have managed to import an object and its texture using the OBJ loader and MTL loader in Three JS. Using such a tool, we can import as many 3D models as we want to create the animation, game, or web application of our choice using Three JS. The .mtl file format works like the material of the object which is imported in the format of .obj. Hope this tutorial has helped you import the 3D models that you like in your Three.js scenes.


What Is A Decentralized Application (Dapp)?

Most of the applications we know and use daily are centralized applications, such as Instagram, Facebook, Uber, Twitter, etc. All of these centralized applications are controlled by a central authority. This means they can be changed or updated, and they can block or delete a particular user’s activity. In general, they can do anything they want, because they have authority over the database and the operation of the app. On the other hand, a decentralized application, or dApp, is a kind of digital application that can operate without the control of a single authority. Once a dApp is written and deployed, no one, including the creator, can change or control it. Decentralized applications are based on smart contracts. The first example of a decentralized application is Bitcoin, a peer-to-peer network that operates transactions and runs automatically on the Bitcoin blockchain. Of course, many other smart contracts have been written for different purposes in the last decade, most of them in the Solidity language on the Ethereum blockchain.

Understanding the dApps:

A standard web application, such as Twitter, runs on a computer system that is owned and operated by an organization, giving it full authority over the app and its operations. There may be multiple users on one side, but the backend control authority is in the hands of a single organization. DApps can run on a P2P (peer-to-peer) network or a blockchain network. For example, Bitcoin, Tor, and CryptoKitties are applications that run on computers that are part of a P2P network, whereby multiple participants are consuming content, feeding or seeding content, or simultaneously performing both functions. In the context of cryptocurrencies, dApps run on a blockchain network in a public, open-source, decentralized environment and are free from control and interference by any single authority. For example, a developer can create a Twitter-like dApp and put it on a blockchain where any user can publish messages. Once posted, no one, including the app creators, can delete the messages.

Advantages of dApps:

One of the main advantages of dApps is the privacy of the users. In other words, in nearly all dApps, there is no need for any user data or login. This helps users not only keep their data private but also have a faster experience when using a decentralized application. You might have faced the annoying process of logging in to a website before using its application; in dApps, there is no need to do so, and you can have a faster and safer experience. Another advantage of decentralized applications is the speed of transactions on different blockchains. You can do any kind of monetary operation, such as transferring money, lending and borrowing, and so on, in a matter of seconds. Moreover, dApps are open source, and many infrastructures are available to quickly create your own custom decentralized web application.

Disadvantages of dApps:

Contrary to centralized applications, dApps are in an early stage of development, and there are some challenges facing their development and use. First of all, these applications cannot easily be updated, because most of the variables and functions of a smart contract are immutable, meaning they cannot be changed. As a result, the lack of control over probable bugs in a smart contract can cause problems. However, there are still ways to tackle this issue by upgrading the smart contract. Another common issue of dApps is security. Of course, smart contracts and blockchains are more secure than most centralized web applications, but there are still problems, such as compromised MetaMask accounts, that need to be solved. Compromised MetaMask accounts are accounts that have been attacked by a hacker and are unable to act normally, because a program such as a flash bot is sweeping all the Ethereum funds.

How to create a dapp?

We have provided a lot of tutorials about writing smart contracts, blockchain, Solidity, and how to write dApps on our blog. You can read the guidelines and get started by learning smart contracts and the languages that support them. Solidity and Rust are the two famous languages that smart contracts are written in. Many blockchain smart contracts are written in Solidity and are supported by the Ethereum platform. Ethereum dApps are powered and developed using the Ethereum platform: they use smart contracts for their logic, are deployed on the Ethereum network, and use the platform’s blockchain for data storage. On the other hand, CosmWasm smart contracts are written in the Rust programming language and supported by the Cosmos SDK. Cosmos is another platform and blockchain; some other blockchains, like the Terra network, use its CosmWasm smart contract templates to operate their smart contracts.

What does the dApp development process look like?

The development of a decentralized application is a staged process performed by a team of full-stack developers, blockchain engineers, and UX designers. Here are the stages a dApp goes through before a final product can be released:
1. Business and technical analysis. The purpose of the app is established. The specialists decide how blockchain can resolve the problem and what platform to choose for that.
2. Architecture design. It is needed to create a proof of concept. Developers evaluate how different app components interact with each other.
3. The creation of prototypes (low-fidelity and high-fidelity designs).
4. The creation of smart contracts and wallets. Smart contracts should execute the app’s business logic and functionality.
5. Backend and frontend final development. Here we connect our smart contract to the web application, forming the final decentralized application, or dApp.
6. Internal smart contract audit. It is conducted to review how requirements and specifications are met before the stage of deployment; otherwise, it is harder to introduce fixes and make updates.
7. Testnet deployment. The performance is evaluated to detect potential security issues and flaws.
8. Mainnet deployment.


Specular Map Three JS: A Fantastic Tutorial

The specular map is a texture image that affects the specular surface highlight on MeshLambertMaterial and MeshPhongMaterial materials. Using the specular map, you can set the shininess of a surface by giving it a grayscale value from black to white, or from 0 to 255: the white points reflect the light more, and the dark points reflect it less. In this tutorial, we will create a sphere geometry and map the texture of the globe onto it. Next, we will use the grayscale specular map of the globe to determine the shininess of its surface. We will also create a GUI to set different parameters, like the shininess, the intensity of the light, the color of the light source, the material, and so on. If you would like to enhance your design portfolio in Three JS, follow along with this tutorial.
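Per pixel, the specular map's effect boils down to normalizing the grayscale value into a highlight strength. A sketch of that mapping (the `specularStrength` helper is illustrative, not a Three.js API):

```javascript
// Map a specular-map pixel's grayscale value (0..255) to a
// highlight strength in the 0..1 range: black kills the
// highlight, white keeps it at full strength.
function specularStrength(gray) {
  return gray / 255;
}

console.log(specularStrength(0));   // 0 — dull surface point
console.log(specularStrength(255)); // 1 — fully shiny point
```

Three.js performs this sampling in the shader, multiplying the material's specular color by the sampled value.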

A simple example from scratch:

We will get started with the main elements of a Three.js scene, including the camera, the renderer, the scene, and the object. Before doing that, we use the Vite plugin to easily create all the folders and files you need to run the Three.js code. First off, create a folder in the directory of your projects by using the following commands: mkdir SpecularMap
cd SpecularMap
Then, inside your project folder, create the necessary files and folders by simply running the Vite command: npm create vite@latest Then enter the name of the project and the package (the names are arbitrary, and you can choose anything you want). Then select vanilla as both the framework and the variant. After that, enter the following commands in the terminal. Notice that here SpecularMap is the project folder’s name, and thus we change the directory to SpecularMap; the name depends on what you entered in the Vite setup: cd SpecularMap
npm install
Afterward, you can enter the JavaScript code you want to write in the main.js file. So, we will enter the base or template code for running every project with an animating object, such as a sphere. Also, do not forget to install the Three.js package library every time you create a project: npm install three For this project, we need to install the dat.gui package as well: npm install --save dat.gui

Implementing the specular map:

Now, enter the following script in the main.js file:

import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import Stats from 'three/examples/jsm/libs/stats.module';
import { GUI } from 'dat.gui';
const scene = new THREE.Scene();
const light = new THREE.PointLight(0xffffff, 2);
light.position.set(0, 5, 10);
scene.add(light);
const camera = new THREE.PerspectiveCamera(
     75,
     window.innerWidth / window.innerHeight,
     0.1,
     1000
);
camera.position.z = 3;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const controls = new OrbitControls(camera, renderer.domElement);
controls.screenSpacePanning = true;

const sphereGeometry = new THREE.SphereGeometry(1, 50, 50);
const material = new THREE.MeshPhongMaterial();
const texture = new THREE.TextureLoader().load('img/globe.jpg');
material.map = texture;
const specularTexture = new THREE.TextureLoader().load('img/SpecularMap.jpg');
material.specularMap = specularTexture;
const globe = new THREE.Mesh(sphereGeometry, material);
scene.add(globe);

window.addEventListener('resize', onWindowResize, false);
function onWindowResize() {
     camera.aspect = window.innerWidth / window.innerHeight;
     camera.updateProjectionMatrix();
     renderer.setSize(window.innerWidth, window.innerHeight);
     render();
};

const stats = Stats();
document.body.appendChild(stats.dom);
const options = {
     side: {
          FrontSide: THREE.FrontSide,
          BackSide: THREE.BackSide,
          DoubleSide: THREE.DoubleSide,
     },
     combine: {
          MultiplyOperation: THREE.MultiplyOperation,
          MixOperation: THREE.MixOperation,
          AddOperation: THREE.AddOperation,
     }
};

const gui = new GUI();
const materialFolder = gui.addFolder('THREE.Material');
materialFolder.add(material, 'transparent');
materialFolder.add(material, 'opacity', 0, 1, 0.01);
materialFolder.add(material, 'depthTest');
materialFolder.add(material, 'depthWrite');

materialFolder.add(material, 'alphaTest', 0, 1, 0.01).onChange(() => updateMaterial());
materialFolder.add(material, 'visible');
materialFolder.add(material, 'side', options.side).onChange(() => updateMaterial());

const data = {
     color: material.color.getHex(),
     emissive: material.emissive.getHex(),
     specular: material.specular.getHex(),
};

const meshPhongMaterialFolder = gui.addFolder('THREE.MeshPhongMaterial');

meshPhongMaterialFolder.addColor(data, 'color').onChange(() => {
     material.color.setHex(Number(data.color.toString().replace('#', '0x')));
});
meshPhongMaterialFolder.addColor(data, 'emissive').onChange(() => {
     material.emissive.setHex(Number(data.emissive.toString().replace('#', '0x')));
});
meshPhongMaterialFolder.addColor(data, 'specular').onChange(() => {
     material.specular.setHex(Number(data.specular.toString().replace('#', '0x')));
});
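The color callbacks above all share the same conversion: dat.gui hands the color back as a hex string like '#ff8800', while the three.js setHex() method expects a number. Pulled out as a standalone helper (the function name here is ours, purely for illustration):

```javascript
// Convert the '#rrggbb' string dat.gui reports back into the number
// that three.js Color.setHex() expects.
function hexStringToNumber(hex) {
  return Number(hex.toString().replace('#', '0x'));
}

console.log(hexStringToNumber('#ffffff')); // 16777215
console.log(hexStringToNumber('#ff8800')); // 16746496
```

Replacing '#' with '0x' turns the string into a hexadecimal literal that Number() can parse directly.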

meshPhongMaterialFolder.add(material, 'shininess', 0, 1024);
meshPhongMaterialFolder.add(material, 'wireframe');
meshPhongMaterialFolder.add(material, 'flatShading').onChange(() => updateMaterial());
meshPhongMaterialFolder.add(material, 'combine', options.combine).onChange(() => updateMaterial());
meshPhongMaterialFolder.add(material, 'reflectivity', 0, 1);
meshPhongMaterialFolder.open();

function updateMaterial() {
     material.side = Number(material.side);
     material.combine = Number(material.combine);
     material.needsUpdate = true;
};

function animate() {
     requestAnimationFrame(animate);
     globe.rotation.y += 0.01;
     render();
     stats.update();
};

function render() {
     renderer.render(scene, camera);
};

animate();


Now, save the code and enter the following command in the terminal:
npm run dev
The above script will give the following result:

As you can see, we have a rotating globe with the sunlight effect on the oceans but not on the lands, which is the effect of the specular map. You can set different parameters in the GUI. For instance, you can change the shininess of the material, as well as its specular color.

The Specular Map:

Before we get into the details of the code, note that we should create a folder in the project directory and call it img. Then, inside that folder, paste the images below, which contain the texture of the globe and its specular map.

Explaining the code:

As always, we added the necessary elements of every scene, such as the scene itself, the camera, the renderer, the material, the geometry, the orbit controls, and the animation function. Notice that we also imported all the necessary packages for our purpose. The main part of this code concerns the type of the material, the light source, and the texture mapping. We also wrote many lines of code related to the GUI (you can skip that part of the script and still get the same result, with the difference that you cannot change the options using the GUI). We used the main texture of the globe (the RGB photo or map of the globe) and the specular map, whose values determine the strength of the specular highlight on each part of the surface, distinguishing the land from the sea. For the material, you can use either the Lambert or the Phong material to get the effect you want. For the light source, we preferably use a point light to simulate the sunlight.
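Conceptually, the specular map scales the specular highlight per texel: bright texels (the sea) keep the full highlight, while dark texels (the land) suppress it. A minimal sketch of that idea in plain JavaScript (illustrative only, not part of the Three.js API):

```javascript
// Illustrative only: a specular map texel in [0, 1] scales the material's
// specular highlight at that point on the surface. Bright sea texels keep
// the highlight; dark land texels suppress it.
function specularStrength(baseSpecular, mapTexel) {
  return baseSpecular * mapTexel;
}

console.log(specularStrength(1.0, 1.0)); // sea texel: full highlight -> 1
console.log(specularStrength(1.0, 0.0)); // land texel: no highlight -> 0
```

This is why the globe's oceans glint under the point light while the continents stay matte.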

Conclusion

In this tutorial, we have managed to create a realistic globe with the shininess effect of the sunlight on the seas and a weaker effect of such on the lands using the specular map in Three JS. Moreover, we created a GUI for the user to set the preferred shininess on the object’s surface (globe). We learned what kind of material, color, light source, and mapping we needed to get the different shininess effects on the different sections of the surface of an object. This was an excellent practice to get the effect we wanted on the surface of various objects with the same mapping.

Download this Article in PDF format

3d websites

Arashtad Custom Services

In Arashtad, we have gathered a professional team of developers who are working in fields such as 3D websites, 3D games, metaverses, and other types of WebGL and 3D applications, as well as blockchain development.

Arashtad Services
Drop us a message and tell us about your ideas.
Fill in the Form
Blockchain Development

How to load STL 3d models in Three JS

The number of objects that you can design in three.js is very limited, and nearly all of them are basic geometries like the cube, sphere, cylinder, torus, and so on. We all know that there are a ton of various models that can be created using design software and platforms like Blender. You can use these 3D models to create a web-based game, an animation, or any kind of 3D interactive UX design. Each of these 3D models comes in a specific file format, such as STL, OBJ, FBX, PLY, GLTF, BLEND, and so on. So, we need to be able to load all of these files in order to use them in Three JS. In this tutorial, we will get familiar with the STL loader in three.js and learn about its details. You can find different 3D models for free on websites like CGTrader.com, Thingiverse, and so on. You can also find a ton of 3D models of human characters, animals, and all kinds of different objects. A great exercise is to further modify these models in 3D software like Blender and add clothes to the human characters. We will soon have a blog post on how to create a 3D model of yourself using a 2D image captured by your phone.

A simple example from scratch:

We will get started with the main elements of a Three.js scene, including the camera, the renderer, the scene, and the object. Before doing that, we use the Vite plugin to easily create all the folders and files you need to run the Three.js code. First off, create a folder in the directory of your projects by using the following commands:
mkdir STLLoader
cd STLLoader
Then, inside your project folder, create the necessary files and folders by simply running the Vite plugin command:
npm create vite@latest
Then enter the name of the project and of the package (both names are arbitrary, and you can choose anything you want). Then select vanilla as the framework and variant. After that, enter the following commands in the terminal. Notice that here STLLoader is the project folder’s name, and thus we change the directory to STLLoader; the name depends on what you entered in the Vite plugin:
cd STLLoader
npm install
Afterward, you can enter the JavaScript code you want to write in the main.js file. So, we will enter the base or template code for running every project with an animating object, such as a sphere. Also, do not forget to install the Three.js package every time you create a project:
npm install three

The code:

Now, enter the following script in the main.js file:

import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader';
import Stats from 'three/examples/jsm/libs/stats.module';
const scene = new THREE.Scene();
const light = new THREE.SpotLight();
light.position.set(20, 20, 20);
scene.add(light);
const camera = new THREE.PerspectiveCamera(
     75,
     window.innerWidth / window.innerHeight,
     0.1,
     1000
);
camera.position.z = 3;
const renderer = new THREE.WebGLRenderer();
renderer.outputEncoding = THREE.sRGBEncoding;
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const controls = new OrbitControls(camera,renderer.domElement);
// transmission and clearcoat are MeshPhysicalMaterial properties,
// so we use it here instead of MeshStandardMaterial
const material = new THREE.MeshPhysicalMaterial({
     color: 0xffffff,
     metalness: 0.35,
     roughness: 0.1,
     opacity: 1.0,
     transparent: true,
     transmission: 0.99,
     clearcoat: 1.0,
     clearcoatRoughness: 0.25
});
const loader = new STLLoader();
loader.load(
     'models/3DModel.stl',
     function (geometry) {
          const mesh = new THREE.Mesh(geometry, material);
          scene.add(mesh);
     },
     (xhr) => {
          console.log((xhr.loaded / xhr.total) * 100 + '% loaded');
     },
     (error) => {
          console.log(error);
     }
);

window.addEventListener('resize', onWindowResize, false);

function onWindowResize() {
     camera.aspect = window.innerWidth / window.innerHeight;
     camera.updateProjectionMatrix();
     renderer.setSize(window.innerWidth, window.innerHeight);
     render();
}

const stats = Stats();
document.body.appendChild(stats.dom);

function animate() {
     requestAnimationFrame(animate);
     controls.update();
     render();
     stats.update();
};

function render() {
 renderer.render(scene, camera);
};

animate();


Now, save the code and enter the following command in the terminal:
npm run dev
The above script will give the following result:

Explaining the code:

Before explaining the code, you might ask how to design the above 3D model, or where to get it from. First of all, the above 3D model has been designed in Blender by the Arashtad team, and we have provided the guidelines for creating such a lattice structure in one of our blog tutorials, called “Different Kinds of Lattice Structure Using Blender”. Secondly, you can download any 3D model you want from the Thingiverse.com website for free; here, we only want to load a .stl 3D model and visualize it in Three.js. The above code is like any Three JS boilerplate script. At first, we declared the scene, the camera, and the renderer and did all the routine setup on them. Afterward, we defined the material with all its properties. Next, we loaded the STL file from the models folder (where we had already pasted our STL 3D model). Finally, we added the animation and render functions. Notice that you should either rename your 3D model to 3DModel.stl or enter its actual file name in the loader section of the code.
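The loader's second callback reports download progress; the percentage it logs is just this calculation, extracted here as a plain function for illustration:

```javascript
// The progress callback logs (xhr.loaded / xhr.total) * 100 + '% loaded';
// the same arithmetic as a standalone helper.
function percentLoaded(loaded, total) {
  return (loaded / total) * 100;
}

console.log(percentLoaded(512, 1024) + '% loaded'); // 50% loaded
```

For large STL files this is handy for driving a loading bar before the mesh appears in the scene.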

Conclusion

In this tutorial, we learned how to load an STL model in Three JS and visualize it in a fantastic way. You can read our articles about Blender on our blog and learn how to design 3D models so that you can later visualize them beautifully in three.js. Moreover, you can download your preferred 3D models from different websites that offer these models for free. There are plenty of file formats in which you can store a 3D model, and in Three JS, we can import nearly all of them. We will cover more detailed formats like OBJ with their textures, as well as the GLTF file format, in future tutorials. Hope you have enjoyed this tutorial!


Point Cloud Effect in Three JS

This tutorial will focus on an exciting topic called the point cloud. Technically, a point cloud is used to create polygons; then, using the normals of those polygons, meshes are built. In Three JS, the point cloud is also used for aesthetics, meaning we use points instead of meshes to represent an object or a geometry. This style of representing objects makes the design look more beautiful and charming than the mesh representation. We achieve this goal by simply using the points material and adding the geometry and the material to a THREE.Points object instead of a mesh. Point cloud representation makes your design look more modern and smart. You can use this effect for technological and innovative items and products, giving them a more modern look. Moreover, a point-cloud-based design can provide a fantastic look for any other type of website, not just technology- or product-based ones. Let’s give it a shot together!

A simple example from scratch:

We will get started with the main elements of a Three.js scene, including the camera, the renderer, the scene, and the object. Before doing that, we use the Vite plugin to easily create all the folders and files you need to run the Three.js code. First off, create a folder in the directory of your projects by using the following commands:
mkdir PointCloud
cd PointCloud
Then, inside your project folder, create the necessary files and folders by simply running the Vite plugin command:
npm create vite@latest
Then enter the name of the project and of the package (both names are arbitrary, and you can choose anything you want). Then select vanilla as the framework and variant. After that, enter the following commands in the terminal. Notice that here PointCloud is the project folder’s name, and thus we change the directory to PointCloud; the name depends on what you entered in the Vite plugin:
cd PointCloud
npm install
Afterward, you can enter the JavaScript code you want to write in the main.js file. So, we will enter the base or template code for running every project with an animating object, such as a sphere. Also, do not forget to install the Three.js package every time you create a project:
npm install three

The code (using the mappings together):

Now, enter the following script in the main.js file:

import * as THREE from 'three';
import { Mesh } from 'three';
import { OrbitControls } from '/node_modules/three/examples/jsm/controls/OrbitControls.js';
import Stats from '/node_modules/three/examples/jsm/libs/stats.module.js';
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth /innerHeight , 0.1, 1000);
const renderer = new THREE.WebGLRenderer({
     antialias : true
});
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

//creating a TorusKnot
const geometry = new THREE.TorusKnotGeometry(10, 3, 300, 20 );

const material = new THREE.PointsMaterial({
     //color:0xffff00,
     size: 0.1 
})

const TorusKnot = new THREE.Points(geometry,material);
scene.add(TorusKnot);
camera.position.z = 15;
new OrbitControls(camera, renderer.domElement);
window.addEventListener('resize', onWindowResize, false);

function onWindowResize() {
     camera.aspect = window.innerWidth / window.innerHeight;
     camera.updateProjectionMatrix();
     renderer.setSize(window.innerWidth, window.innerHeight);
     render();
}

const stats = Stats();
document.body.appendChild(stats.dom);
function animate() {
     requestAnimationFrame(animate);
     TorusKnot.rotation.y += 0.002;
     render();
     stats.update();
}
function render() {
     renderer.render(scene, camera);
}

animate();


Now, save the code and enter the following command in the terminal:
npm run dev
The above script will give the following result:

Explaining the code:

To create the above point cloud effect, we made very few changes to the boilerplate code that we copy into all of our three.js scripts. One of these changes was using PointsMaterial instead of MeshBasicMaterial. Then, using the size property of the material, we set the size of the points so they have enough visibility. Afterward, instead of using THREE.Mesh, we used THREE.Points to create the object.

const geometry = new THREE.TorusKnotGeometry(10, 3, 300, 20 );
const material = new THREE.PointsMaterial({
     //color:0xffff00,
     size: 0.1 
})
const TorusKnot = new THREE.Points(geometry,material);


In the photo below, you can see the torus knot with the point cloud effect from another angle:

Conclusion

In this tutorial, we learned how to create the point cloud effect using the points material in Three.js. This effect is very useful for designing modern websites and for marketing modern technology concepts. Creating such an effect is simple and, at the same time, clever. Note that an object’s point cloud representation differs from using a PCL loader or the PCDLoader, which load point cloud files: there are moments when we have the point cloud file of an object and want to import it, which is different from creating the points out of a geometry built in Three.js. If you have an object that you want to import and represent in point cloud form, make sure you read our articles on the Loaders subject on this blog.
