Ubuntu Server Basic Libraries [ with SSL, Nokogiri, Postgres and Node.js ]

To install basic build tools and the libraries Nokogiri needs:

sudo apt-get install git build-essential patch ruby-dev zlib1g-dev liblzma-dev
sudo apt-get install -y libssl-dev libreadline-dev

Library required by Uglifier (it needs a JavaScript runtime):

sudo apt-get install nodejs

Library required by Postgres (the pg gem):

sudo apt-get install libpq-dev

Install and set up Nginx with Puma:
see "Setup Nginx with Puma on Ubuntu, with GoDaddy".


Create and integrate an SSL certificate in a Rails app using [ GoDaddy + Nginx (1.8) + Puma + Ubuntu Server (14.04 LTS) ]

Log in to your server via SSH, then:

  1. Create the file YOUR_RAILS_APP_DIRECTORY/config/puma.rb
    Below is the puma.rb file content:

    
    
    #!/usr/bin/env puma
    
    directory '/home/ubuntu/YOUR_RAILS_APP_DIRECTORY/public/'
    rackup '/home/ubuntu/YOUR_RAILS_APP_DIRECTORY/config.ru'
    
    environment 'production'
    daemonize true
    
    pidfile '/home/ubuntu/YOUR_RAILS_APP_DIRECTORY/tmp/pids/puma.pid'
    state_path '/home/ubuntu/YOUR_RAILS_APP_DIRECTORY/tmp/pids/puma.state'
    stdout_redirect '/home/ubuntu/YOUR_RAILS_APP_DIRECTORY/log/puma.log'
    threads 2, 5
    bind 'unix:///home/ubuntu/YOUR_RAILS_APP_DIRECTORY/tmp/sockets/puma.sock'
    workers 2
    
  2. Generate a CSR (certificate signing request) and private key:

     openssl req -new -newkey rsa:2048 -nodes -keyout SITE_DOMAIN_NAME.key -out SITE_DOMAIN_NAME.csr

    Note: For instance, if the site domain is facebook.com, you would use facebook.key and facebook.csr.

  3. The .key file will be used in the Nginx configuration.
    Copy the .csr file's content and paste it into GoDaddy's CSR field.

    After GoDaddy generates the SSL certificate, download it. Then upload the downloaded zip to the Ubuntu server using the scp command.

  4. Unzip the archive; it will contain two .crt files. Chain those files using this command:

    
    cat file_name.crt  file_containing_bundle_in_name.crt > some_name.chained.crt
    

    Note: Order matters in the .chained.crt file: the file with "bundle" in its name must come second, as in the command above.

  5. The resulting chained file should be used in the ssl_certificate directive:

    
    server {
        listen              443 ssl;
        server_name         www.example.com;
        ssl_certificate     some_name.chained.crt;
        ssl_certificate_key file_generated_by_openssl_command.key;
        ...
    }
    
  6. Create file /etc/nginx/conf.d/your_app_name.conf

    
    ###################### SETUP SSL ON NGINX USING PUMA ######################

    # The upstream name must match the name used in the "proxy_pass" directive
    # (in this example, 'cuhivetech')
    upstream cuhivetech {
        # puma.sock is created by Puma at startup; bind this same path
        # in YOUR_APP_PATH/config/puma.rb
        server unix:///home/ubuntu/YOUR_APP_DIRECTORY/tmp/sockets/puma.sock;
    }

    # Redirect HTTP requests to HTTPS
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    # HTTPS server
    server {
        listen 443;
        server_name site_domain.com;

        root /home/ubuntu/YOUR_APP_DIRECTORY/public;
        try_files $uri/index.html $uri.html $uri @app;

        ssl_certificate /home/ubuntu/GODADDY_CERT_DIRECTORY/cuhivetech.chained.crt;
        ssl_certificate_key /home/ubuntu/cuhivetech.key;

        ssl on;
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        proxy_read_timeout 90;

        location @app {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_pass http://cuhivetech;
            proxy_redirect http://cuhivetech https://cuhivetech;
        }
    }
    
  7. Now restart the Nginx server using this command:

    sudo service nginx restart
  8. Go back to godaddy.com

    • Go to Domains
    • Click "Manage DNS" for your specific domain
    • Click "DNS Zone File"
    • Edit the entry under "A (Host)" and put your server's IP address in the "POINTS TO" field.
  9. Now start the web server (Puma) using this command:

    bundle exec puma -C config/puma.rb 
  10. DNS changes can take some time to propagate. After that, you can access the site using your domain. A few shell checks for the config, redirect, and certificate chain are sketched below.
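
These checks are optional but catch most mistakes (site_domain.com stands in for the domain used in the config above):


# Validate the Nginx configuration before (re)starting it
sudo nginx -t

# Confirm the HTTP -> HTTPS redirect (expect a 301)
curl -I http://site_domain.com

# Inspect the certificate chain served on port 443
openssl s_client -connect site_domain.com:443 -servername site_domain.com < /dev/null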

GoDaddy SSL certificate guide
GoDaddy: request an SSL certificate


mongoid group by query


stages = [
  { "$match" => { column_name: "Value" } },
  { "$group" => {
      # "_id" holds the fields you want to group records on
      "_id" => {
        "name_of_your_choice" => "$column_name",
        "year"  => { "$year"  => "$column_name" },
        "month" => { "$month" => "$column_name" },
        "day"   => { "$dayOfMonth" => "$column_name" },
        "hour"  => { "$hour" => "$column_name" }
      },
      "get_avg_of_grouped_records" => { "$avg" => "$column_name" },
      "count" => { "$sum" => 1 }
    }
  }
]

@array_of_objects = ModelName.collection.aggregate(stages, {:allow_disk_use => true})

Query stages
The $match stage applies its conditions before the later stages run, so place it first.
The $group stage groups the collection's documents; "_id" sets the fields on which records are grouped.
Error: A pipeline stage specification object must contain exactly one field. (16435)
The stages variable is an array, and each stage must be its own hash element in it. In the query above, $match and $group are two stages, each placed in a separate hash as a separate element of the stages array.
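
The same pipeline can be tried directly in the mongo shell before wiring it into Mongoid, which makes the one-hash-per-stage shape easy to see. A minimal sketch; the database, collection, and field names are placeholders:


mongo your_database --eval '
printjson(db.your_collection.aggregate([
  { $match: { column_name: "Value" } },
  { $group: { _id: { hour: { $hour: "$column_name" } }, count: { $sum: 1 } } }
]).toArray())
'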


Weka: create an instance and training data using an ARFF file

Read training data from an ARFF file:


import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;

public static Instances get_instances_from_arff() throws Exception {
	BufferedReader breader = new BufferedReader(
			new FileReader(System.getProperty("user.dir") + "/src/weka_usage/exploration_tracks.arff"));
	Instances training_data = new Instances(breader);
	// The last attribute is the class (target) attribute
	training_data.setClassIndex(training_data.numAttributes() - 1);
	breader.close();
	return training_data;
}

Create a new instance with the provided attribute values:



import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;

public static Instance create_instance(double[] attr, Instances training_data) {
	// Create the instance with 4 attributes
	DenseInstance inst = new DenseInstance(4);
	inst.setValue(0, attr[0]); // web
	inst.setValue(1, attr[1]); // db
	inst.setValue(2, attr[2]); // arrival rate
	inst.setValue(3, attr[3]); // response time

	inst.setDataset(training_data); // associate training data with the instance to help in its classification
	return inst;
}

The build_classifier method:


import weka.classifiers.AbstractClassifier;
import weka.classifiers.functions.GaussianProcesses;
import weka.classifiers.functions.LinearRegression;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.trees.RandomForest;

public static AbstractClassifier build_classifier(String type, Instances data) throws Exception {
	// Note: compare strings with equals(), not ==
	if (type.equals("RandomForest")) {
		RandomForest rF = new RandomForest();
		rF.buildClassifier(data);
		return rF;
	}
	else if (type.equals("MultilayerPerceptron")) {
		MultilayerPerceptron rF = new MultilayerPerceptron();
		rF.buildClassifier(data);
		return rF;
	}
	else if (type.equals("LinearRegression")) {
		LinearRegression rF = new LinearRegression();
		rF.buildClassifier(data);
		return rF;
	}
	else {
		// GaussianProcesses is the default
		GaussianProcesses rF = new GaussianProcesses();
		rF.buildClassifier(data);
		return rF;
	}
}
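
These snippets assume weka.jar is on the classpath. A minimal sketch of compiling and running a class that contains them (the jar path and class name are hypothetical):


# Compile against weka.jar, then run with it on the classpath
javac -cp /path/to/weka.jar WekaUsage.java
java -cp .:/path/to/weka.jar WekaUsage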

Attach an EBS volume as root to an Amazon EC2 instance

Amazon EBS Device Naming Conventions

Attach the volume to the existing instance by following these steps:

  1. Stop your instance.
  2. Create a snapshot of the root volume.
  3. Create a new volume from that snapshot.
  4. Detach the old Amazon EBS root volume from the (already stopped) instance by right-clicking on the old EBS volume.
  5. Attach the new Amazon EBS volume to the instance. [On Ubuntu, set the device name to /dev/sda1]
  6. Start your instance. (An AWS CLI sketch of the same steps follows this list.)
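
The console steps above can also be scripted with the AWS CLI; a rough sketch with placeholder instance, volume, and snapshot IDs and a placeholder availability zone (wait for each step to finish before running the next):


aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume backup"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0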

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-add-volume-to-instance.html


Mongo::Error::NoServerAvailable OR mongod stopped working (Rails) OR Insufficient free space for journals, terminating

First I tried to remove the mongod lock, but it did not work:


sudo rm /var/lib/mongodb/mongod.lock
sudo service mongodb restart

Then I tried to change the permissions of the /tmp folder:


ls -lh /tmp
chown root:root /tmp
chmod 1777 /tmp
sudo service mongodb restart
tail -f /var/log/mongodb/mongod.log

That did not work either, but it guided me towards the problem.

Problem:


Insufficient free space for journal file
Please make at least 3379MB available in /var/lib/mongodb/journal or use --smallfiles
Insufficient free space for journals, terminating
now exiting # mongod is stopping itself
shutdown: going to close listening sockets...
removing socket file: /tmp/mongodb-27017.sock
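
Before changing any configuration, confirm that the partition holding /var/lib/mongodb really is short on space:

df -h /var/lib/mongodb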

Solution:

sudo nano /etc/mongod.conf

and add


storage:
   mmapv1:
      smallFiles: true

Now run mongod against the configuration file using the command below:

mongod -f /etc/mongod.conf

In other terminal tab open log file

tail -f /var/log/mongodb/mongod.log

Restart mongod

sudo service mongod restart

The log file will contain "connection now open" if everything is fine.

Side notes:
1. To check the installed mongod version:

mongod --version

2. Other mongod.conf options:


# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
  mmapv1:
    smallFiles: true  # OPTION ADDED FOR SMALL FILES
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

http://stackoverflow.com/a/8479630/1222852
https://docs.mongodb.org/manual/reference/configuration-options/
MongoDB smallFiles option settings for different versions


httperf sending requests


httperf --hog --server localhost --port 80 --wsesslog 1,1,req.txt --rate 1 --num-con 1 --num-call 1 --timeout 5 --add-header="Content-Type: application/x-www-form-urlencoded\n" --print-reply

To get only the reply header or only the body, use:
--print-reply=header OR --print-reply=body

req.txt file content


/PHP/ think=2.0
/PHP/register.html

/PHP/sell.html                                                
/PHP/BrowseCategories.php method=POST contents="nickname=root&password=root"

/PHP/XYZ.php method=POST contents="nickname=root&password=root"
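
For a quick run against a single URI, httperf can also be invoked without a session file (the numbers here are arbitrary):


httperf --server localhost --port 80 --uri /index.html --num-conns 100 --rate 10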

http://www.hpl.hp.com/research/linux/httperf/wisp98/html/doc003.html
http://www.mervine.net/performance-testing-with-httperf
http://www.hpl.hp.com/research/linux/httperf/httperf-man-0.9.txt
http://jairtrejo.mx/blog/2014/04/performance-testing-with-httperf
https://gist.github.com/FZambia/5599483


Rails production environment assets loading

In Gemfile add:

gem 'rails_12factor'

In config/environments/production.rb add


config.assets.compile = false # stop runtime asset compilation in production
config.assets.digest = true   # serve precompiled assets whose file names have digests appended by Rails
config.cache_classes = true   # allow caching of classes (and assets)

In config/application.rb, replace this:

Bundler.require(:default, Rails.env)

with this

Bundler.require(:default, :assets, Rails.env)

Also add these lines


# Enable the asset pipeline
config.assets.enabled = true
# Version of your assets, change this if you want to expire all your assets
config.assets.version = '1.0'
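
With config.assets.compile set to false, assets must be precompiled as part of each deploy, for example:

RAILS_ENV=production bundle exec rake assets:precompile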

Downloading Objects from Amazon S3 using the AWS SDK [API V2] for Ruby

Set the variables below in your project, as environment variables, or however you prefer.


AWS_ACCESS_KEY_ID = 'S3 bucket access key id'
AWS_SECRET_ACCESS_KEY = 'S3 bucket secret access key'
AWS_REGION = 'S3 bucket region'
AWS_BUCKET = 'bucket name'

Use either of the two approaches below to download objects to your local system.


s3 = Aws::S3::Client.new
s3.list_objects(bucket: 'AWS_BUCKET NAME HERE').each do |response|
  response.contents.each do |obj|
    File.open("#{Rails.root}/#{obj.key}", 'wb') do |file|
      s3.get_object( bucket: 'AWS_BUCKET NAME HERE', key: obj.key , response_target: file)
    end
  end
end

s3 = Aws::S3::Client.new
bucket = Aws::S3::Bucket.new('AWS_BUCKET NAME HERE')
bucket.objects.each do |obj|
  File.open("#{Rails.root}/#{obj.key}", 'wb') do |file|
    # Note: ENV keys are strings, e.g. ENV['AWS_BUCKET'], not ENV[:AWS_BUCKET]
    s3.get_object(bucket: 'AWS_BUCKET NAME HERE', key: obj.key, response_target: file)
  end
end
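
As a quick cross-check (or a non-Ruby alternative), the AWS CLI can mirror the same bucket into a local directory; the bucket name and destination here are placeholders:

aws s3 sync s3://your-bucket-name ./local_backup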

get_object-instance_method V2 documentation
Official AWS SDK GEM FOR RUBY


rbenv install 2.2.1 ruby not working

After running this command:

rbenv install 2.2.1

I got the error below:

Installing ruby-2.2.1…

BUILD FAILED (Ubuntu 14.04 using ruby-build 20150928-2-g717a54c)

Inspect or clean up the working tree at /tmp/ruby-buil

Install the 'libffi-dev' package using the command below; then the install command above will work:

sudo apt-get install libffi-dev
