Like many of you, I cannot access MongoDB instances from work: they run on various ports, but never on 80, and everything except port 80 is blocked.
The workaround is an SSH tunnel, listening on port 80, to a remote EC2 instance, used from 3T MongoChef.
All steps
Click on Launch Instance:
Select Ubuntu:
t2.micro is enough for SSH
Add this user data:
#!/bin/bash -ex
perl -pi -e 's/^#?Port 22$/Port 80/' /etc/ssh/sshd_config
service sshd restart || service ssh restart
If Perl gives an error or is missing, you can use sed instead:
sed -i 's/Port 22/Port 80/' /etc/ssh/sshd_config
Standard storage is enough
Add these tags (AutoOff is used later for scheduling, but add it now):
Create a security group with port 80 open:
Review all the settings:
Create a new key pair and download it to a secure location; do not lose it, as it is needed for the SSH connection.
Allocate an Elastic IP to avoid public IP changes (we will shut the instance down outside work hours, which would otherwise change its public IP)
Then associate the Elastic IP with the EC2 instance
You should now be able to connect in MongoChef. Use the following settings:
Test the connection and see if everything is okay.
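If MongoChef cannot connect and you want to isolate the problem, the same tunnel can be opened by hand. A minimal sketch, assuming the key file is named sshtunnel.pem and MongoDB listens on the default port 27017 (replace the Elastic IP placeholder with your own):

```
# Tunnel over SSH on port 80; local 27017 is forwarded to the remote MongoDB
ssh -p 80 -i sshtunnel.pem -N -L 27017:localhost:27017 ubuntu@<your-elastic-ip>
```

While this runs, any mongo client pointed at localhost:27017 goes through the tunnel.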
Now we will create a schedule to shut down the server outside work hours. If you want to keep it always running, you can skip these steps.
Create a role with 2 policies:
Managed policies:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "ec2:Describe*",
                "ec2:Start*",
                "ec2:RunInstances",
                "ec2:Stop*",
                "datapipeline:*",
                "cloudwatch:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Inline policies:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
Now create the functions to start and shut down the server
Create a new lambda function:
Skip the blueprint:
Create a function with runtime Python 2.7:
import boto3
import logging

# Set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all stopped EC2 instances tagged with AutoOff=True
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['stopped']
        }
    ]

    # Filter the instances
    instances = ec2.instances.filter(Filters=filters)

    # Collect the IDs of the stopped instances
    StoppedInstances = [instance.id for instance in instances]

    # Print the instances for logging purposes
    # print StoppedInstances

    # Make sure there are actually instances to start
    if len(StoppedInstances) > 0:
        # Perform the start
        starting = ec2.instances.filter(InstanceIds=StoppedInstances).start()
        print starting
    else:
        print "Nothing to see here"
Select the role you created for the function:
After creation, create a schedule with cron:
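The schedule uses the CloudWatch Events cron syntax (minute, hour, day-of-month, month, day-of-week, year). The hour below is just an example; pick whatever matches your work day. To start the instance at 06:00 UTC on weekdays:

```
cron(0 6 ? * MON-FRI *)
```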
For the stop, create another lambda with this code:
import boto3
import logging

# Set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances tagged with AutoOff=True
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]

    # Filter the instances
    instances = ec2.instances.filter(Filters=filters)

    # Locate all running instances
    RunningInstances = [instance.id for instance in instances]

    # Print the instances for logging purposes
    # print RunningInstances

    # Make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # Perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print shuttingDown
    else:
        print "Nothing to see here"
Then create another schedule to shut down:
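Again an example expression, assuming an 18:00 UTC end of day; this stops the tagged instance on weekday evenings:

```
cron(0 18 ? * MON-FRI *)
```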
Later on, you can also create a SOCKS proxy using the same server. In Ubuntu, open a console and run this command (copy your .pem file to the desktop first):
ssh -p 80 -i Desktop/sshtunnel.pem ubuntu@52.52.51.51 -D1234
This will create a SOCKS proxy on localhost:1234.
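To verify the proxy works, assuming curl is available (ifconfig.me is just one of many IP-echo services), you can compare your apparent public IP with and without it:

```
# Through the tunnel: should print the EC2 instance's Elastic IP
curl --socks5-hostname localhost:1234 https://ifconfig.me
```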
Then configure your browser to use it; for example, in Firefox: