Wednesday, September 28, 2016

Windbg .net debugging most used commands

ctrl+D
load the dump file

k
show the call stack of the current thread

~* kb 2000
examine the native call stacks of all threads

version
check versions of the dlls

.load sos
required for .net (extension)

.unload
to unload unwanted versions

.load sosex
loads the extended sos commands

~
list of threads

~* e !clrstack
runs !clrstack on every thread, to see which ones are running managed code

~49s
switches to thread 49

!clrstack -p
show the managed call stack of the current thread, including parameter values

!do 039ec48c
details on instance 039ec48c

kb 2000
show the native call stack of the current thread (managed frames are not resolved)

!syncblk
Shows which thread owns the lock
Index SyncBlock MonitorHeld Recursion Owning Thread Info  SyncBlock Owner
   97 000000a76acc6f08            3         1 000000a76ca31660 3b0  52   000000a7005d1a18 System.Object
  418 000000a76b2463a8            1         1 000000a76ca894f0 238c  37   000000a702f6f2f8 ASP.global_asax
MonitorHeld counts 1 for the owning thread plus 2 for each waiting thread, so MonitorHeld=3 means one owner and one waiter.
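The MonitorHeld arithmetic can be sketched in a few lines of Python (using the common interpretation of the SOS output: owner contributes 1, each waiter contributes 2):

```python
# Recover the number of waiting threads from !syncblk's MonitorHeld column.
# Convention: the owning thread contributes 1, each waiting thread contributes 2.
def waiters(monitor_held):
    return (monitor_held - 1) // 2

print(waiters(3))  # row 97 above: 1 waiter
print(waiters(1))  # row 418 above: 0 waiters
```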

!dlk
examines the deadlocks

!mlocks
lists managed locks and their owning threads (sosex)

!dso
dump the objects referenced on the current thread's stack

!mdso
objects in different view (with links)

!mk
Produces and displays a merged stack trace of managed and unmanaged frames.

!address -summary
show summary of mem

!eeheap -gc
get managed heap size

!dumpheap -stat
objects and sizes in heap

!refs 000000a705314c38
shows all references to the object

!mwaits
show all waiting threads

!strings
shows all strings in dump (might take very long)

Friday, May 20, 2016

Checking connectivity between servers

Use powershell:

(New-Object System.Net.WebClient).DownloadString("http://www.google.com")

Instead of Google, you can put the IP-based URL of the other server. If you get "Unable to connect to remote server", there is probably an issue with the firewall, the security groups, or other connectivity.
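If PowerShell is not available, a similar check can be sketched in Python (a minimal sketch; the URL below is a placeholder for the server you want to reach):

```python
import urllib.request

def is_reachable(url, timeout=5):
    """Return True if an HTTP GET to url succeeds, False on any error."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

# An unresolvable host indicates a DNS, firewall, or connectivity issue.
print(is_reachable("http://no-such-host.invalid"))
```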

Thursday, May 19, 2016

Connect MongoDb using SSH tunnel on port 80 Amazon EC2

Like most of you, I also cannot access the MongoDb instances from work, because they run on non-standard ports, never on 80.

What I will create is an SSH tunnel to a remote EC2 instance, used from 3T's MongoChef.

All steps
Click on Launch Instance:


Select Ubuntu:


t2.micro is enough for SSH


Add this user data:
#!/bin/bash -ex
perl -pi -e 's/^#?Port 22$/Port 80/' /etc/ssh/sshd_config
service sshd restart || service ssh restart

If Perl errors out or is missing, you can use sed instead (the pattern also handles a commented-out Port line):
sed -i 's/^#\?Port 22$/Port 80/' /etc/ssh/sshd_config


Standard storage is enough

Add these tags (AutoOff is used later on, but add it now)

Create security group with port 80 available

Review of all settings:

Create a new key pair and download it to a secure location; do not lose it. It is used for the SSH connection.

Assign an Elastic IP to avoid public IP changes (we will shut the instance down outside work hours)

Then associate the Elastic IP with the EC2 instance

Now you should be able to connect in MongoChef. Use the following settings:

Test the connection, see if everything is okay.

Now we will create a schedule to shut down the server outside work hours. If you want to keep it always running, you can skip these steps.

Create a role with 2 policies:
Managed policies:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "ec2:Describe*",
                "ec2:Start*",
                "ec2:RunInstances",
                "ec2:Stop*",
                "datapipeline:*",
                "cloudwatch:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Inline policies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

Now create the Lambda functions to start and stop the server

Create a new lambda function:

Skip the blueprint:

Create a function with runtime Python 2.7:

import boto3
import logging

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

#define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all stopped EC2 instances tagged AutoOff=True.
    filters = [{
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['stopped']
        }
    ]

    #filter the instances
    instances = ec2.instances.filter(Filters=filters)

    #locate all stopped instances
    StoppedInstances = [instance.id for instance in instances]

    #print the instances for logging purposes
    #print StoppedInstances

    #make sure there are actually instances to start
    if len(StoppedInstances) > 0:
        #perform the start
        starting = ec2.instances.filter(InstanceIds=StoppedInstances).start()
        print starting
    else:
        print "Nothing to see here"


Select the role that is created for the function:

After creation, create a schedule with cron:

For the stop, create another lambda with this code:

import boto3
import logging

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

#define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [{
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
 
    #filter the instances
    instances = ec2.instances.filter(Filters=filters)

    #locate all running instances
    RunningInstances = [instance.id for instance in instances]
 
    #print the instances for logging purposes
    #print RunningInstances
 
    #make sure there are actually instances to shut down.
    if len(RunningInstances) > 0:
        #perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print shuttingDown
    else:
        print "Nothing to see here"

Then create another schedule to shut down:
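As an example, the two schedules above could use CloudWatch cron expressions like these (times are UTC; the work hours are an assumption, adjust to your own):

cron(0 6 ? * MON-FRI *)    start at 06:00 UTC on weekdays
cron(0 18 ? * MON-FRI *)   stop at 18:00 UTC on weekdays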

Later on, you can also create a SOCKS proxy using the same server. On Ubuntu, open a terminal and run this command (copy your pem file to the Desktop first):
ssh -p 80 -i Desktop/sshtunnel.pem ubuntu@52.52.51.51 -D1234

This will create a proxy on localhost:1234
Then configure your browser, for example Firefox:

Tuesday, April 26, 2016

ES6 reduce function cheatsheet

[3,6,7,9,10].reduce((a,b,i,all) => {console.log(a + " " + b + " " + i + " " + all);return a+b;})

3 6 1 3,6,7,9,10
9 7 2 3,6,7,9,10
16 9 3 3,6,7,9,10
25 10 4 3,6,7,9,10
> 35
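For comparison, the same running sum can be traced in Python with functools.reduce. Note that Python's reduce passes only the accumulator and the current value to the callback, not the index or the whole array:

```python
from functools import reduce

def step(acc, value):
    print(acc, value)  # trace each step, like the console.log above
    return acc + value

# With no initializer, the first element seeds the accumulator,
# just like the single-argument form of Array.prototype.reduce.
total = reduce(step, [3, 6, 7, 9, 10])
print(total)  # 35
```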

Wednesday, April 20, 2016

Cleaning up a publication target in the proper way

We have some components marked as published to publication targets, but the publication target does not exist anymore.

This causes problems when you want to delete the pages published to that target (after a data migration, for example, they still think they are living on a remote server) :)

We cannot delete them, as they are marked as published!

Possible solution:
First I played a bit with the Core Service.

 <system.serviceModel>
    <bindings>
      <basicHttpBinding>
<binding name="basicHttp" maxReceivedMessageSize="10485760">
          <readerQuotas maxStringContentLength="10485760" maxArrayLength="10485760"/>
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Windows"/>
          </security>
        </binding>
      </basicHttpBinding>
      <wsHttpBinding>
        <binding name="wsHttp" transactionFlow="true" maxReceivedMessageSize="10485760">
          <readerQuotas maxStringContentLength="10485760" maxArrayLength="10485760"/>
          <security mode="Message">
            <message clientCredentialType="Windows"/>
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>
    <client>
      <endpoint name="Basic_CoreServiceDev"
          address="http://yourtridionserver/webservices/CoreService2013.svc/basicHttp"
          binding="basicHttpBinding"
          bindingConfiguration="basicHttp"
          contract="Tridion.ContentManager.CoreService.Client.ICoreService"/>
      <endpoint name="CoreServiceDev"
          address="http://yourtridionserver/webservices/CoreService2013.svc/wsHttp"
          binding="wsHttpBinding"
          bindingConfiguration="wsHttp"
          contract="Tridion.ContentManager.CoreService.Client.ISessionAwareCoreService"/>
    </client>
  </system.serviceModel>

Then I used this code to clean:
using (var client = new SessionAwareCoreServiceClient("CoreServiceDev"))
{
    if (client.ClientCredentials != null)
        client.ClientCredentials.Windows.ClientCredential =
            new System.Net.NetworkCredential(
                ConfigurationManager.AppSettings["Username"],
                ConfigurationManager.AppSettings["Password"]);

    client.SetSessionTransactionTimeout(60 * 30); // 30 minutes
    Console.WriteLine("Please enter the tcm-id of the publication target");
    var tcmid = Console.ReadLine();
    try
    {
        Console.WriteLine("Started");
        client.DecommissionPublicationTarget(tcmid);
        Console.WriteLine("Completed");
        client.SetSessionTransactionTimeout(60);
    }
    catch (Exception ex)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("Error: " + ex.Message);
        Console.ResetColor();
    }
}

The result was still a time-out!

In the end, running PowerShell on the server was the better option (the argument is the publication target's tcm-id):
PS C:\Windows\system32> Import-Module Tridion.ContentManager.Automation
PS C:\Windows\system32> Clear-TcmPublicationTarget tcm:0-2-12345