We consider a connected network of n nodes, each of which has access to a local cost function; the common objective of all nodes is to minimize the aggregate cost. In this paper we propose a distributed Newton-like method for minimizing a penalty reformulation of the objective, exploiting its specific structure. The Hessian is approximated by its diagonal part, which, owing to the problem structure, can be inverted in a distributed manner. The remaining part of the Hessian is used to correct the right-hand side of the quasi-Newton equation, preserving as much second-order information as possible. The method exhibits linear convergence under a set of standard assumptions on the objective functions and the network architecture, and the numerical results confirm its efficiency.
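To illustrate the core idea, the following is a minimal centralized sketch (not the paper's distributed implementation) for a hypothetical quadratic objective f(x) = ½xᵀAx − bᵀx: the Hessian is split as A = D + E, where the diagonal D is trivially invertible, and the off-diagonal remainder E corrects the right-hand side of the quasi-Newton system through a few Jacobi-style inner sweeps. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

# Illustrative quadratic problem: minimize 0.5 x^T A x - b^T x.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + 4 * n * np.eye(n)   # SPD, strongly diagonally dominant Hessian
b = rng.standard_normal(n)

Dinv = 1.0 / np.diag(A)           # inverse of the diagonal part (cheap, local)
E = A - np.diag(np.diag(A))       # off-diagonal remainder of the Hessian

x = np.zeros(n)
for _ in range(100):
    g = A @ x - b                 # gradient at the current iterate
    d = np.zeros(n)
    # Inner sweeps on A d = -g: solve the diagonal system D d = -(g + E d),
    # i.e. use E to correct the right-hand side with the previous direction.
    for _ in range(3):
        d = -Dinv * (g + E @ d)
    x = x + d

err = np.linalg.norm(x - np.linalg.solve(A, b))
print(err)
```

At the inner fixed point D d = -(g + E d) gives (D + E) d = -g, i.e. the exact Newton direction; truncating the inner loop yields the inexact direction whose error contracts geometrically, consistent with the linear convergence claimed in the abstract.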