Step-wise guide to setup MySQL Replication

Following my previous blog post, where I discussed how to install multiple MySQL server instances, we are now ready to move on to setting up replication.

What is replication?
– Replication is the capability that allows the databases on one server (the master) to be made available on other servers (the slaves).

MySQL supports replication capabilities that allow the databases on one server to be made available on other server instances.

MySQL replication uses a master/slave architecture:
– The server that manages the original databases is the master.
– Any server that holds a copy of the original databases is a slave.
– A given master server can have many slaves, but a slave can have only a single master.

Replication Topology
(figure: MySQL master/slave replication topology)

Explanation:
1) There are three threads working in parallel: one on the master host and the remaining two (the IO thread and the SQL thread) on the slave host.

2) The task of the thread on the master host is to write all data-changing events to a file called the binary log (make sure binary logging is enabled on the master host).

3) The IO thread connects to the master host, reads the events recorded after a specific replication coordinate, and writes them to a local file on the slave called the relay log.

4) The SQL thread reads the not-yet-applied events from the relay log and applies them to the slave's databases.

Setting up Replication:
– Before replication starts, we need a starting replication coordinate at which the master and all the slaves hold exactly the same data. The easiest way to achieve this is to dump out the master's current state and load it into all the slaves.

 mysqldump --all-databases --master-data=2 > dump_file

Assuming that binary logging is enabled, the --master-data=2 option causes the dump file to include a comment containing a CHANGE MASTER statement that indicates the replication coordinates as of the time of the backup. These coordinates can be used later when you tell the slave where to begin replicating in the master's binary log.
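For reference, that comment near the top of the dump file looks roughly like this (the file name and position shown here are illustrative and will differ on your server):

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=120;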

Copy the dump file to the replication slave host and load it into the MySQL server on that machine:

mysql < dump_file

Master Configurations:
– The master must have binary logging enabled. This cannot be turned on with a SET statement at runtime; instead, set the binary log location in the master's configuration file (my.cnf) and restart the server:

              log-bin = /var/log/mysql/mysql-bin.log
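Replication also requires every server to have a unique server ID. A minimal [mysqld] section for the master might look like this (the path and ID are illustrative):

              [mysqld]
              server-id = 1                               # illustrative; must be unique per server
              log-bin   = /var/log/mysql/mysql-bin.log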
          

– Create a replication user on the master host and grant it replication access only.

              CREATE USER 'replicator'@'localhost' IDENTIFIED BY 'password';
              GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'localhost';
          

And that's all from the master side.

Slave Configurations:
– Set the location for the relay log files. Add this line to the slave's configuration file:

              
              relay-log = /var/lib/mysql/slave-relay.log
          

NB: This is optional; if we don't set the location, the files are stored in the default location (inside the data directory).
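Each slave likewise needs its own unique server-id (different from the master and from every other slave). A minimal sketch of a slave's [mysqld] section, with illustrative values:

              [mysqld]
              server-id = 2                               # illustrative; must differ from the master
              relay-log = /var/lib/mysql/slave-relay.log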

Linking master with the slaves:
Now we need to instruct the slave which master to connect to. We also tell the slave where to start reading in the master's binary log file.

              
CHANGE MASTER TO
MASTER_HOST = 'localhost',
MASTER_USER = 'replicator',
MASTER_PORT = 3306,
MASTER_PASSWORD = 'password',
MASTER_LOG_FILE = 'mysql-bin.000001',
MASTER_LOG_POS = 120;

NB: The hostname is the host where the master server is running. The username and password are those for the replication account that you set up on the master. The log file and position are the replication coordinates in the master's binary log. (You can get these from the CHANGE MASTER statement near the beginning of the dump file.)

   
START SLAVE;

We can check the list of replication slaves currently registered with the master with this command (run on the master):

SHOW SLAVE HOSTS;
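On the slave side, SHOW SLAVE STATUS reports the state of the IO and SQL threads; once replication is healthy, both Slave_IO_Running and Slave_SQL_Running should say Yes:

SHOW SLAVE STATUS\G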

And we are done. The slave should connect to the master and begin replicating updates that the master sends to it.

NB: Remember, since we have multiple MySQL instances, we can set up replication on our local host to test it. 🙂


Install Multiple MySQL Server Instances to the same host

It is very common to have one server instance per host, though it is also possible to install multiple server instances (of the same or different server versions) on the same host. A typical reason is wanting to test a new MySQL release in the same development environment before the actual migration. Managing multiple servers is a more complex undertaking than running a single server because we have to make sure that the servers do not interfere with each other: none of them can share resources that must be used exclusively by a single server.

These resources include the following:

1) Each server must have its own data directory.
2) Each server must have its own network interfaces, such as a TCP/IP port, named pipe or shared memory (on Windows), and Unix socket file (on Unix). They can share the same hostname/IP as long as they listen on different ports.
3) The service name must be unique on the Windows platform.
4) Each server must have its own log and PID files.

My previous blog post, MySQL installation with memcache, gives you an idea of how to install a server instance. Let's move ahead from there.

We can follow those steps for the first server installation, leaving the defaults as:
– installation dir: /usr/local/mysql/
– data dir: /usr/local/mysql/data
– socket file location: /tmp/mysql.sock
– service name: /etc/init.d/mysql
– port: 3306

The server can then be managed like this:

/etc/init.d/mysql [start/restart/reload/stop/status]

To install another server instance, we follow the same steps as before, keeping in mind that it must not interfere with the first server's resources.

cd ~/mysql-5.6.12
mkdir bld2
cd bld2/
cmake \
    -DBUILD_CONFIG=mysql_release \
    -DWITH_INNODB_MEMCACHED=ON \
    -DCMAKE_INSTALL_PREFIX=/usr/local/mysql1/ \
    -DMYSQL_TCP_PORT=3307 \
    -DMYSQL_UNIX_ADDR=/usr/local/mysql1/mysql.sock \
    -DSYSCONFDIR=/usr/local/mysql1 \
    -DMYSQL_DATADIR=/usr/local/mysql1 ..

make
make install
...
...

NB: We can omit the memcache plugin part. Also note that the installation directory has been changed to a new location, /usr/local/mysql1/, so as to keep a separate copy of the data directory.

Copy the server startup script under a unique service name.

cp support-files/mysql.server /etc/init.d/mysql1

Create a new configuration file (my.cnf) specifically for this server under its installation directory (to make sure its resources are distinct):

sudo vim my.cnf

and change:

port = 3307
socket = /tmp/mysql1.sock
pid-file = /usr/local/mysql1/data/mysql1.pid
general_log_file= /var/log/mysql1/mysql.log
log_error = /var/log/mysql1/error.log

NB: I have used mysql1 everywhere to keep this server's data distinct; any unique string can be used. If we also need server_id and log_bin, make them unique too (we'll discuss these in my next post).

We have now separated the resources and are ready to start the new server.

sudo /etc/init.d/mysql1 start
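To confirm we are really talking to the second instance and not the default one, connect through its socket or port (for example, mysql -uroot -p --socket=/tmp/mysql1.sock) and run a quick sanity check on the server variables:

SELECT @@port, @@socket, @@datadir;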

(screenshot: multiple MySQL servers running)

And we are done. We can manage (start / status / reload / restart / stop) the new server just like the first one, using the new service name (mysql1).

NB: If the server fails to start, look at the mysqld log (which resides inside the data directory); for other errors, have a look at the new server's error/general log.

Thanks for reading 🙂

 


How to install memcached plugin with MySQL 5.6

The MySQL installers (deb/yum packages etc.) don't come with the memcached feature enabled by default (at least at the time of writing this post). For memcached to work together with the MySQL server, the server needs to be compiled from source with all the libraries required by the memcache plugin.

I am on Ubuntu 12.04, 32-bit. I am sure the steps are more or less the same for other operating systems.

Here is the step-by-step guide.
Prerequisite:
Install the libevent library, which is used by memcached.

sudo apt-get install libevent-dev

1. Download the latest .zip or .tar compressed package from MySQL’s official site.
2. Unzip the package.
3. Navigate to the package directory.

cd mysql-5.6.12
mkdir bld
cd bld/

4. You'll need cmake to configure the source package.

  • Install cmake if needed
    sudo apt-get install cmake
  • Configure the source as
    cmake -DBUILD_CONFIG=mysql_release -DWITH_INNODB_MEMCACHED=ON ..

NB: -DWITH_INNODB_MEMCACHED=ON is what enables memcache; you can optionally pass other options if you like. Building with this flag generates two shared libraries in the MySQL plugin directory that are required to run InnoDB memcached:

  • libmemcached.so: the memcached daemon plugin to MySQL.
  • innodb_engine.so: an InnoDB API plugin to memcached.

5. Compile the source distribution and install it (we'll set the correct account permissions next).

make
make install

6. Change location into the top-level directory of your MySQL installation. You will find several files and subdirectories in the directory. The most important for installation purposes are the bin and scripts sub-directories.

cd /usr/local/mysql
chown -R mysql .
chgrp -R mysql .

NB: Create user/group named “mysql” if that doesn’t already exist.
7. Now run the setup script and set the correct permissions. Most of the MySQL installation can be owned by root if you like. The exception is that the data directory must be owned by mysql. To accomplish this, run the following commands as root in the installation directory. For some distribution types, the data directory might be named var rather than data; adjust the second command accordingly.

scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data

8. We're setting up this service as mysql.server, so let's copy the init script into init.d.

cp support-files/mysql.server /etc/init.d/mysql

9. Start the MySQL server:

bin/mysqld_safe --user=mysql &

10. Set up the root password:

bin/mysql_secure_installation

11. Now we are in a position to install and enable the memcache plugin, but before we can enable the interface, we need to create the configuration tables that the plugin will use. Back at the terminal:

bin/mysql -uroot -p < share/innodb_memcached_config.sql

This creates a new database named innodb_memcache containing three tables.

  • cache_policies
  • config_options
  • containers

It also creates a demo_test table in the test database.
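The containers table is what maps a memcached key space onto an InnoDB table. To see how the demo_test table is wired up, we can inspect it from the mysql client (the exact values depend on the bundled config script):

SELECT * FROM innodb_memcache.containers\G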
12. Here we go, log in to the mysql client and install the plugin:

bin/mysql -uroot -p
mysql> INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';
mysql> show plugins;
(screenshot: the memcached daemon appears in the plugin list)

13. Exit back out to the shell and run:

sudo netstat -tap

(screenshot: netstat process list)
Notice that mysqld is now listening on port 11211 as well. This is the memcache interface up and running.
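Because the plugin stores data straight into InnoDB, anything written through port 11211 is also visible with plain SQL. Querying the sample table is an easy smoke test; it should at least show the sample row created by the config script:

SELECT * FROM test.demo_test;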

NB: The bin directory contains client programs and the server. You should add the full path name of this directory to your PATH environment variable so that your shell finds the MySQL programs properly.

And that's it, we are done. 🙂


MySQL Event Scheduler


The MySQL Event Scheduler, as the name suggests (yes, you guessed it right :)), is something that runs events on a schedule, much like a Linux cron job.

In this article we'll see how to set up and run the event scheduler on your MySQL server.

Let's get started… 🙂

      1. What is Mysql Event Scheduler?

     Everybody has used the Unix cron job, or at least heard the term: cron automatically calls scripts at the intervals we schedule. Think of a situation where we have complex database logic to be executed periodically or at a particular timestamp; we can achieve the same in MySQL. Sounds interesting :). MySQL has a thread that manages these scheduled jobs, called the "MySQL Event Scheduler".

      2. What we require?

     To get the MySQL event scheduler working, we first need to know the current status of the global variable "event_scheduler", which determines whether the Event Scheduler is enabled and running on the server. The variable can take three values (ON, OFF or DISABLED).

ON starts the scheduler thread, OFF (the default) stops it (as the names suggest, simple). We need to set the global variable to ON.

mysql>SET GLOBAL event_scheduler = ON; 
or
mysql>SET @@global.event_scheduler = ON; 
mysql> show global variables like 'event_scheduler';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | ON    |
+-----------------+-------+
1 row in set (0.01 sec)
or
mysql> select @@event_scheduler;
+--------------------+
| @@event_scheduler  |
+--------------------+
| ON                 |
+--------------------+
1 row in set (0.00 sec)

NB: This requires the SUPER privilege.

         3. Creating our Event:

         It's time now to prepare our simple MySQL event. We need to have the EVENT privilege on the database we are creating the event for.

CREATE
    EVENT
    IF NOT EXISTS
    my_simple_mysql_event
        ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 10 MINUTE
        ON COMPLETION NOT PRESERVE	
    DO 
	INSERT INTO
 	test_table
	(id,event_name,createdtime)
	VALUES
	(null,'my_simple_mysql_event',CURRENT_TIMESTAMP)
;
Query OK, 0 rows affected (0.00 sec)
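The event body above assumes a test_table already exists in the selected database; a hypothetical definition that matches the INSERT would be:

CREATE TABLE test_table (
    id INT AUTO_INCREMENT PRIMARY KEY,   -- hypothetical definition, for illustration only
    event_name VARCHAR(64),
    createdtime TIMESTAMP
);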

NB: We need to select the database in which we want the event to be created; if no database is selected, it is created in the default database.

The syntax is self-descriptive, isn't it? We are creating an event (if it is not already present, of course, in the same database) named my_simple_mysql_event, which is scheduled to insert a new row into the test_table table 10 minutes from now. 🙂 That's it.

Our event will be run automatically by the MySQL server at the scheduled timestamp.
Want to know the metadata of the event? 🙂
There are three options we can go with. The simplest is typing:

mysql> show events\G

*************************** 1. row ***************************

                  Db: Test_DB
                Name: my_simple_mysql_event
             Definer: root@localhost
           Time zone: SYSTEM
                Type: ONE TIME
          Execute at: 2013-01-20 01:04:23
      Interval value: NULL
      Interval field: NULL
              Starts: NULL
                Ends: NULL
              Status: ENABLED
          Originator: 1
character_set_client: latin1
collation_connection: latin1_swedish_ci
  Database Collation: latin1_swedish_ci

1 row in set (0.00 sec)

Instead of that simple command, we can also query either:

  • the 'event' table in the mysql database, or
  • the 'events' table in the information_schema database.

Want to know the state of your event :), i.e. whether our event is in the running queue at all or not?

mysql> show processlist\G

*************************** 1. row ***************************

     Id: 98
   User: root
   Host: localhost
     db: Test_DB
Command: Query
   Time: 0
  State: NULL
   Info: show processlist

*************************** 2. row ***************************

     Id: 99
   User: event_scheduler
   Host: localhost
     db: NULL
Command: Daemon
   Time: 14
  State: Waiting for next activation
   Info: NULL
2 rows in set (0.00 sec)

NB: If we restart our mysql server (mysqld):

  1. We need to set the global variable 'event_scheduler' again, because it gets reset every time the server shuts down (see the configuration sketch after this list for a way to persist it).
  2. Our event will no longer be in the scheduled queue, even if it is still present in the events table; we need to re-enable our event as well.
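To avoid re-enabling the scheduler by hand after every restart, it can also be switched on in the server's option file; a minimal sketch:

[mysqld]
event_scheduler = ON   # keeps the scheduler enabled across restarts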

In the process list above we can see 'event_scheduler' (User) is 'Waiting for next activation' (State) and it is a Daemon process.
After 10 minutes…..

mysql> select * from test_table where 1;
 +----+-----------------------+---------------------+
 | id | event_name            | createdtime         |
 +----+-----------------------+---------------------+
 |  1 | my_simple_mysql_event | 2013-01-20 01:04:23 |
 +----+-----------------------+---------------------+
1 row in set (0.00 sec)

🙂 Our first row has been created.
Besides creating an event that runs once at a particular timestamp, we can create recurring events as well, which run at regular intervals.

CREATE
    EVENT
    IF NOT EXISTS
    my_simple_mysql_event
    ON SCHEDULE EVERY 10 MINUTE 
	STARTS CURRENT_TIMESTAMP + INTERVAL 2 MINUTE 
	ENDS CURRENT_TIMESTAMP + INTERVAL 12 MONTH
    ON COMPLETION NOT PRESERVE	
    DO 
	INSERT INTO
 	test_table
	(id,event_name,createdtime)
	VALUES
	(null,'my_simple_mysql_event',CURRENT_TIMESTAMP)
;

Query OK, 0 rows affected (0.00 sec)

We just replaced AT with EVERY, STARTS and ENDS,
which says the event starts 2 minutes from now, recurs every 10 minutes, and ends 12 months from now. Simple 🙂

Its metadata now looks like this: it has start and end dates, and the type turns to RECURRING.

mysql> show events\G

*************************** 1. row ***************************

                  Db: Test_DB
                Name: my_simple_mysql_event
             Definer: root@localhost
           Time zone: SYSTEM
                Type: RECURRING
          Execute at: NULL
      Interval value: 10
      Interval field: MINUTE
              Starts: 2013-01-18 21:24:54
                Ends: 2014-01-18 21:22:54
              Status: ENABLED
          Originator: 1
character_set_client: latin1
collation_connection: latin1_swedish_ci
  Database Collation: latin1_swedish_ci

1 row in set (0.00 sec)

            4. Altering our Event:

The definition of an existing event can be changed by means of the ALTER EVENT statement. The syntax for each of the DEFINER, ON SCHEDULE, ON COMPLETION, COMMENT, ENABLE / DISABLE, and DO clauses is exactly the same as when used with CREATE EVENT. We can even rename the event.

mysql>ALTER
    EVENT
    my_simple_mysql_event
    ON SCHEDULE EVERY 30 MINUTE
	STARTS CURRENT_TIMESTAMP + INTERVAL 2 MINUTE
	ENDS CURRENT_TIMESTAMP + INTERVAL 24 MONTH
    RENAME TO renamed_event
    DO
	INSERT INTO
 	test_table
	(id,event_name,createdtime)
	VALUES
	(null,'my_simple_mysql_event',CURRENT_TIMESTAMP)
;
Query OK, 0 rows affected (0.00 sec)

As we can see, our existing event's frequency has been changed to 30 minutes, the end time to 24 months from now, and it has been renamed to 'renamed_event'.

        5. Dropping our Event:

        When a scheduled event is no longer needed, it can be deleted from the server by its definer using the DROP EVENT statement.

mysql> DROP EVENT IF EXISTS renamed_event;
Query OK, 0 rows affected (0.00 sec)

    Apart from this statement, if we want our event to be deleted automatically after its task completes, we can use the clause 'ON COMPLETION NOT PRESERVE' in the CREATE EVENT statement.

This post is only meant to give a brief idea about one of the coolest features present in MySQL :).

You can get more details about this here:

http://dev.mysql.com/doc/refman/5.1/en/events.html


VARCHAR MySQL Joining

Often we have to join MySQL tables on character columns. Here are some reasons why performance may lag behind joins on INT fields.

As CHAR keys grow longer, performance degrades significantly: short index values can be processed more quickly than long ones. As a result, when you index a column, ask whether it's sufficient to index partial column values rather than complete values. This technique of indexing a column prefix can be applied to string data types.

Example: Consider the table definition:

CREATE TABLE t
(
name CHAR(255),
INDEX (name)
);

If you index all 255 characters of the values in the name column, index processing will be relatively slow because:

  • It’s necessary to read more information from disk.
  • Longer values take longer to compare.
  • The index cache is not as effective because fewer key values fit into it at a time.

It’s often possible to overcome these problems by indexing only a prefix of the column values. For example, if you expect column values to be distinct most of the time in the first 15 characters, index only that many characters of each value, not all 255 characters. To specify a prefix length for a column, follow the column name in the index definition by a number in parentheses. The following table definition is the same as the previous one, except that key values in the index use only the first 15 characters of the column values:

CREATE TABLE t
(
name CHAR(255),
INDEX (name(15))
);

Indexing a column prefix can speed up query processing, but works best when the prefix values tend to have about the same amount of uniqueness as the original values. Don’t use such a short prefix that you produce a very high frequency of duplicate values in the index.

It might require some testing to find the optimal balance between long index values that provide good uniqueness versus shorter values that compare more quickly but have more duplicates.

To determine the number of records in the table, the number of distinct values in the column, and the number of duplicates, use this query:

SELECT
COUNT(*) AS 'Total Rows',
COUNT(DISTINCT name) AS 'Distinct Values',
COUNT(*) - COUNT(DISTINCT name) AS 'Duplicate Values'
FROM t;

The query gives you an estimate of the amount of uniqueness in the name values. Then run a similar query on the prefix values:

SELECT
COUNT(DISTINCT LEFT(name,n)) AS 'Distinct Prefix Values',
COUNT(*) - COUNT(DISTINCT LEFT(name,n)) AS 'Duplicate Prefix Values'
FROM t;

That tells you how the uniqueness characteristics change when you use an n-character prefix of the name values. Run the query with different values of n to determine an acceptable prefix length.
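For instance, to test the 15-character prefix used in the table definition above, substitute 15 for n:

SELECT
COUNT(DISTINCT LEFT(name,15)) AS 'Distinct Prefix Values',
COUNT(*) - COUNT(DISTINCT LEFT(name,15)) AS 'Duplicate Prefix Values'
FROM t;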

NB: If an index on a full column is a PRIMARY KEY or UNIQUE index, you’ll probably have to change the index to be non-unique if you decide to index prefix values instead. When you index partial column values, it’s more likely that the prefix values will contain duplicates. 

Now, if the character columns we join on are properly indexed, joins on those fields perform significantly better :)


OAuth made easy

(figure: OAuth 2.0)

This article explains why we actually need to learn OAuth, what this beast actually is, how we implement it on our site, an example, and what the difference is between OAuth 1.0 and OAuth 2.0.

Hope you enjoy the article instead of getting bored.
I tried to explain things in a pictorial way as much as I can, rather than with boring words and paragraphs 😉

Let's get started

Agenda for the article

  1. Why we learn this?
  2. What is this ‘beast’?
  3. How we implement this? Can you show me an example?
  4. Difference between Oauth 1.0 and Oauth 2.0

Why we learn this?

Think of this situation: you have an account on "www.printfast.com", and you want to print photos. And your photos are on picassa (quite possible).

(figure: authentication without OAuth)

What are the limitations of sharing your password?

  • Trust (well, that's crazy :))
  • More access than required
  • No granular support
  • Phishing, as we are sending sensitive data to a 3rd party.
  • Lower reliability of the API interface
  • Unable to revoke permission from a particular client when more than one client is associated

I am sure you don't want to expose your password; neither do I. 🙂
Let's start again, this time with OAuth.

(figure: authentication with OAuth)

This time we haven't shared our picassa password (with printfast.com), yet we are still able to get our photos from picassa. That's OAuth 🙂

That's merely one usage of OAuth (haven't you seen websites saying "log in with Facebook/Google"? That's another), and there are lots of others.

What is this ‘beast’ exactly?
OAuth (short for Open Authorization) is used for "delegating access"
    – to a certain third-party application
    – for a certain time
    – for the selected resources
    – to which access can be selectively revoked
that’s a very simple definition of OAuth.

Some key terms to know before we proceed further.

  • resource owner: When the resource owner is a person, it is referred to as an end-user.
  • resource server: The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
  • client: An application making protected resource requests on behalf of the resource owner and with its authorization.
  • authorization server: The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

How we implement this? Can you show me an example?
Here are the steps:
(figure: OAuth authorization flow)

  1. This is just a URL redirection to picassa.com from the client (printfast.com), with some query parameters attached to it.
  2. The previous navigation takes you to picassa.com; here you log in to your picassa account (if you are not already logged in) and then you are asked to grant access to the required resource.
    NB: you can give your username/password to picassa, of course; that's the actual resource server 🙂
  3. The authorization grant gives you an authorization code, which you receive on the callback page. Now we need to make an HTTP request to the authorization server (picassa.com) to get the actual access token.
    NB: this call happens server to server, automatically, once your server receives the authorization code.
  4. The previous HTTP request results in a response that finally contains the actual access_token and its expiry time.
    NB: after this step, the authorization server gives you a token with a short lifetime. You can exchange it for a refresh token (which has a longer life) and store that in your database; it can be used for later calls as well, if required. 🙂
  5. With the access_token we can call our protected resources hosted on picassa, making GET/POST/PUT … (as necessary) HTTP calls to the resource server.
  6. The resource server gives you the actual resource. 🙂

OAuth: an example
Let's take the example of Google. Things to configure before we make actual calls:

  1. Register Application: Every third-party application needs to register with the OAuth provider (e.g. google.com); there you get a client_id and client_secret. Go to https://code.google.com/apis/console and, under API Access, create a web application and enter the callback URL.
  2. NB: the callback URL is needed because the OAuth provider must know where to send the authorization code.
  3. You get the client_id & client_secret.
    NB: don't share your client secret with anyone else.
    That's it, the setup is done; we are ready to move forward.
  • Step 1: Obtain an Authorization code:
    Request:

    URL: https://accounts.google.com/o/oauth2/auth
    Method: GET
    Parameters:
        client_id=your client id here (we get this from the API console)
        response_type=code (this means we want the OAuth provider to send us an authorization code)
        redirect_uri=http://localhost/callback.php (to listen for the callback)
        scope=https://www.googleapis.com/auth/userinfo.profile+https://www.googleapis.com/auth/userinfo.email
    

    (we can give third-party apps granular permissions; more scopes can be added by appending them with + at the end)


    Response: redirected to the callback page, with code as a GET parameter

    http://localhost/callback.php?code=4/apiW66nrpw1_7qid63Xie7gKrghW.oqUY_kaYjgIcgrKXntQAax3QfaFtegI
    

    NB: if the user did not accept the permission request, the server returns an error parameter like #error=access_denied

  • Step 2: Obtain an Access Token from the Google Authorization Server:
    We use the authorization code (code for short) to do an HTTP POST request and get the access token. This step happens in the back end, server (client server) to server (authorization server), so there is no scope for your data to be pinched.
    Request:

    URL="https://accounts.google.com/o/oauth2/token";
    Method: POST
    Parameters:
        code=the authorization code($_GET['code'])
        client_id=client_id_here
        client_secret=client_secret_here
        redirect_uri=http://localhost/callback.php
        grant_type="authorization_code"
    

    NB: Notice that the client secret is needed here; it is known only to your application and the OAuth provider.

    Response: the server returns the access_token & expiry in JSON

    {
      "access_token": "ya29.AHES6ZTiwVu0GvdMxX5SBPXEbbGN2diesFEPbuNmRVLUb54c9wjiEw",
      "token_type": "Bearer",
      "expires_in": 3600
    }
    
  • Step 3: Send the Access Token to an API (Obtaining User Profile Information)
    Let's see if the access_token is working at all.
    Request:

    URL: https://www.googleapis.com/oauth2/v1/userinfo
    Method: GET
    Parameters: 
    	access_token= accessToken(that we get on the step 2)
    

    Response: Basic profile details(JSON)

    {
      "id": "103775319893123886481",
      "email": "avishekk111@gmail.com",
      "verified_email": true,
      "name": "avishek kumar",
      "given_name": "avishek",
      "family_name": "kumar",
      "link": "https://plus.google.com/103775319893123886681",
      "picture": "https://lh5.googleusercontent.com/-	KSQcWLL0TTE/AAAAAAAAAAI/AAAAAAAAAGc/3_xKjh-bp_8/photo.jpg",
      "gender": "male",
      "birthday": "1988-09-12",
      "locale": "en-GB"
    }
    

Hurray, we did it :).
Here is the link to sample PHP code to get started with OAuth.
https://github.com/mfsi-avisheks/OAuth-2.0-sample


Why should we avoid unnecessary indexing to a table?

We have heard this many times: a table needs to be indexed for faster look-ups, so why should we avoid it? And the next question that comes to mind is: how do we know if an index is unnecessary?

Let's make these points clear:

  1. For every new row inserted into a table, apart from adding the new row's values to the data file associated with the table, the MySQL server has to add an entry to each index file as well. Obviously, processing the index files for every new row is extra overhead for the MySQL server. This is fine for columns where we cannot compromise on being indexed, but for other columns, where it is unnecessary, it should be avoided. (The same applies to UPDATE and DELETE statements.)
  2. And there are further consequences: the space required for storing the index files will be bigger, and consequently processing time can be higher.
  3. MySQL caches indexes, so the smaller the index size, the more index blocks fit into the cache memory, the faster query processing will be, and the less disk IO is needed.

Now to the second part: how do we know if a column is being unnecessarily indexed?

  1. Don't create indexes we don't need. If a column is never used in a WHERE clause or in ORDER BY or GROUP BY, don't add an index to it; it is never needed.
  2. Even if a column is used in WHERE/ORDER BY/GROUP BY clauses, that does not necessarily mean it needs to be indexed. Don't add an index to a column whose lookups return a large percentage of the records in the table; it will be just as fast to scan the table as to incur the overhead required to process the index. As a consequence, an index on a column that has very few distinct values is unlikely to do much good. For example, for an is_deleted column ENUM('Y','N') or a gender column ENUM('M','F'), the values are roughly evenly distributed, such that a search for either value returns about half of the records. In this case, an index on the column is unlikely to result in faster queries.
  3. Avoid creating multiple indexes that overlap (have the same initial columns). This is wasteful because MySQL can use a multiple-column index even when a query uses just the initial columns for look-ups.

E.g. if we have a multiple-column index like INDEX(column1, column2), MySQL can use it even when a query filters on column1 alone. (NB: this is not the case with column2 alone.)
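As a rough sketch (the table and column names here are made up for illustration), the first two queries below can use the composite index, while the last one cannot use it on its own:

CREATE TABLE orders
(
customer_id INT,
order_date DATE,
INDEX idx_cust_date (customer_id, order_date)   -- hypothetical composite index
);

-- Can use idx_cust_date: filters on the leftmost indexed column
SELECT * FROM orders WHERE customer_id = 42;

-- Can use the full index: filters on both columns
SELECT * FROM orders WHERE customer_id = 42 AND order_date = '2013-01-20';

-- Cannot use idx_cust_date on its own: order_date is not the leftmost column
SELECT * FROM orders WHERE order_date = '2013-01-20';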

Hope this helps 🙂
