alphamale93yo. how do I direct godaddy domain to aws instance application running on port 3000?
gholmsThe easy way is with ELB.
alphamale93dominant! ill look into it
alphamale93is it more expensive tho
alphamale93or is it free?
gholmsIt is more expensive.
gholmsIf you want a single-instance solution that won't cost you anything, use an http daemon on the instance that proxies to your web app.
gholms(assuming it's a web app)
alphamale93it is. idk.. I guess I can’t find the /etc/httpd/…. on my amazonlinux
gholmsYou probably need to install the package, then.
alphamale93what is it currently using, though?
gholmsYou tell me. It's your app. :)
alphamale93I can connect to my node.js app on port :3000… is that the only thing?
alphamale93like there’s no underlying service doing it?
gholmsThat's node itself.
gholmsYou generally put a more fully-featured web server in front of that.
gholmse.g. apache httpd, nginx, iis
gholmsThere are tons of tutorials out there for that.
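A minimal nginx server block for the setup gholms describes might look like the following (a sketch, not from the channel: the `server_name` is a placeholder, and it assumes the node app listens on

```nginx
server {
    listen 80;
    server_name;   # placeholder domain

    location / {
        # Forward all traffic to the node.js app on port 3000
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```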
alphamale93looking at it now!!!!!
pluszakIt seems that m1.small instance storage is slower than the standard drive, how come?
Ove_m* seems slower than anything else
Ove_m3.medium was really slow for me. Using t2 with single core was faster.
pluszakOve_: t2 can burst to be faster than m3.medium, sure. But m3.medium can be working at 100% all the time
Ove_pluszak: It was faster in all ways.
Ove_Not only with bursts.
Armaysshello i put hosted port 80 and containerport 3000 in my front-nginx, because my container is exposed on 3000, and i linked it to my back
ArmayssWill my back and my front communicate ? my front works on 3500, and sends to localhost:5000 to my back
Armaysson elastic beanstalk
saijuHi All,
saijui need to add an ebs volume to a linux machine with command ( aws ec2 create-volume --size 1 --availability-zone ? --volume-type gp2). But i don't know which availability zone i should select.
narcansaiju: the same of your ec2 instances
saijunarcan: I don't have the details of ec2 instances. i just have a login to the server and this instance has IAM role to create volume but the availability zone is unknown to me
Takumosaiju: you can query this info from the ec2 instance (no iam permissions required)
saijunarcan: though apache is running, i am not getting the region details from the meta data (curl http://localhost/latest/dynamic/instance-identity/document|grep region|awk -F\" '{print $4}')
Takumosibiria: metadata url is at
saijuTakumo: what would be the command to query this.
Takumoreturns the AZ
Takumoe.g. eu-west-1b
Takumoso you could put that in $() or `` to use in your create-volume command
Takumoaws ec2 create-volume --size 100 --availability-zone $(curl --volume-type gp2
Takumonow for my issue :P
TakumoHi all, I'm having a bit of trouble with security groups and ipv6 -- I've got our office's ipv6 cidr ( 2001:8b0:fb63::/48 ) -- but when I add this to a security group it doesn't seem to work
Takumo(if I call a service like I get an IP 2001:8b0:fb63:7872… )
saijuTakumo: thanks that did work.
Armaysscan we use a config.json in the authentication of the dockerrun for an amazon private registry ? or is the token temporary ?
TakumoArmayss: the token is temporary, you have to log in using the commands given by the ecr get-login API
TakumoI think the tokens last about 2 hours
Takumobut the docs should specify
TakumoIIRC AWS didn't want to "break" the standard docker registry login process flow
ArmayssTakumo, can we use the authentication for amazon private registries in dockerrun ?
Armayssor should i use another private registry ?
Takumoshould be able to, ecr's get-login does return standard docker registry login commands
Takumowould be nice if it could just give you the token instead of the full command though
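Since `get-login` prints a full `docker login -u AWS -p <token> … <registry>` command, the token Takumo wants can be peeled out of it with `sed` (the pattern here is an assumption about that output layout, so verify against what your CLI version actually prints):

```shell
# get-login returns a complete "docker login" command; capture it,
# then extract just the -p (password/token) argument.
LOGIN_CMD=$(aws ecr get-login --region us-east-1)
TOKEN=$(printf '%s' "$LOGIN_CMD" | sed -n 's/.* -p \([^ ]*\).*/\1/p')
echo "$TOKEN"
```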
dwtsHello guys, quick question, I want to make a request for more EIPs so I'm filling this limit increase form...Is the "New limit value" the field to put the amount of new EIPs I want to get?
Armayssi have a unknown volume error in ELB
Armayssthis is the folder of my source code, why ?
pluszakArmayss: "unknown volume error"? That's exactly what it says?
ArmayssService:AmazonECS, Code:ClientException, Message:Unknown volume 'formation-back'.,
pluszakdwts: sounds more like a total you want to have
dwtspluszak: thanks
Armaysspluszak, i created two images, one for the front and one for the back
Armayssin my dockerfile i go to "formation-back" and then i npm install
Armayssthen i go back to /src
Armayssin my source volume i specify then "formation-back" in the source volume
Armayssfor the mountpoints of my dockerrun file
ayogiguys, what is the policy to allow "Launch/shutdown any existing instance"
pluszakexisting instance?
ayogipluszak: yeah the user can stop existing instances, launch instances but can not terminate them
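A policy along these lines would cover what ayogi describes (a sketch: `Resource` is left wide open, the explicit Deny on terminate is belt-and-braces since IAM denies by default, and in practice `RunInstances` usually needs further permissions for AMIs, subnets, key pairs, and security groups):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*"
    }
  ]
}
```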
Armayssshould i create one application with two environments for my front and my back or one application with one environment with multi container ?
Armaysswhat is the difference ?
pparihi everyone..i need some help with aws s3
crawlerIs it possible to assign a default route in ec2 instance from another subnet, for example in one vpc i have 2 subnets and, and one ec2 in each subnet with private ips, lets say and Is it possible change default route in second instance like this: route del -net && route add -net mask gw
nirvanko 13. On the Event sources tab, choose Add event source.
nirvankoAny idea where I can find that event tab after I create a lambda function?
nirvankoCan't find it.
nirvankoI see Code / Configuration / Triggers / Monitoring tabs
nirvankoNo event tab
pparihi all
pparii need little help with aws s3
thehunt33rppari: What do you need ?
thehunt33rmaybe I can help you
pparii am trying to use my s3 bucket for website hosting
thehunt33rbtw you should directly post your question
pparisure is my domain. and my app is on
pparihow to setup aws s3 bucket and route 53 for the following
pparii am not seeing my bucket in alias in route 53. :(
Takumois the s3 bucket set up for website hosting?
pparii dont have anything to show in
thehunt33ryeah you have to enable your bucket for website hosting
pparii enabled it
thehunt33rhave you followed this doc : ?
thehunt33rcan you access to your app with your bucket url ?
ppariit say my bucket has to be and in route 53 i need to setup
pparii can setup in my route 53 hosted zone. but in alias my bucket is not shown
ppariif i host my app on then it show. but i have multiple apps. say app1, app2
ppariand one can visit using and
thehunt33rso those are folders in your bucket right?
ppariyeah you can say so
ppariif i create a bucket called
pparibut my app will be inside a folder in that bucket
pparii have done that
thehunt33rhave you configured your bucket permissions?
ppariso if i visit i can access the document
pparibut how do i setup the route for it
theShirbinyput a small js to redirect to your app directory
theShirbinygoogle javascript redirect
pparino cannot do that
Armayssi have ECS task stopped due to: Essential container in task exited.
Armaysswhereas i have a CMD in my container
sibiriacan anyone confirm that for Aurora, not only do you pay for two instances (obviously) but also each instance is twice as expensive when it's a "multi-az" instance
nezZarioyes .. reliability costs
nezZarioours is about $1,200/mo
sibiriait doesn't make any effing sense...
sibiriapay for two instances when multi-az, but the idea that each instance is twice as expensive on top of that
sibiriais it possible to shut the replica down in a multi-az aurora cluster?
sibiriaand, is it possible to add a failover-replica again at a later point
nicktoddIs it possible to have a nested namespace in Cloudwatch metrics? So you could have A/1 A/2 and B/1 B/2?
cloudyMoonnicktodd: idk, but our cloudwatch metrics expert should be here in 20 i can let you know
nicktoddcloudyMoon: Great. Thanks!
cloudyMoonmy thought is no, but i have only used alarms and events :/
nicktoddYeah, I’ve tried a few things but I’ve not got it to work so I guess its a no.
nicktoddcloudyMoon: I think I understand where I’m going wrong. I need to set the namespace to cover one particular metric and then define things under it using the ‘Dimensions’ attribute.
nicktoddThat works as I would like. Thanks :)
awilkinsHello, question : I'm really struggling with inconsistent behaviour of ALB ("Application" Elastic Load Balancer, not Classic)
awilkinsThe health checking is rating one of my instances UnHealthy even though it works fine (can shell into it, can curl the healthcheck page to the console)
awilkinsHave 2x instance 1 each in Availability Zones a and b
awilkinsALBs are in 2x public subnets, instances are in 2x private subnets
awilkinsWhat I don't get is i) It works fine with ELB Classic ; I have kept the same ACLs and security groups for the ALB setup
awilkinsii) I've had periods where it all worked fine and both instances were "healthy" but most of the time it's just one... or none
awilkinsWatching the logs and healthchecks from AZ b are landing on the instance in AZ a (and when traffic to AZ b worked, vice versa)
awilkinsWith classic, healthchecks from the ELB in both AZs end up on both servers
kgirthoferwhat are you health checking on
awilkinsA special healthcheck path on the main app port
kgirthoferif everything is working healthy - have you tried creating a new ALB
kgirthoferand checking that
awilkinsI'll give it a go manually - all this is Terraform'd
richidHaving some issues with rolling out new ECS cluster instances and wanted to see if I'm going crazy. It seems like setting the instances to DRAINING doesn't respect the min/max healthy threshold set for ECS service deployment.
richidThat is, setting an instance to DRAINING immediately kills the task(s) before ensuring they are healthy on another instance, so you end up with an outage
kcarpenterAnyone know if its possible to search an S3 bucket for a file name if I don't know the full key?
kcarpenterI know where it should be, but it ain't there :-/
richidIn my example, I have 2 instances in the ECS cluster with 1 task running and setting DRAINING on the instance where the task is running causes a 20 second period where the task isn't running on either instance
kcarpenterNever mind - found it...yay "eventually consistent"
awilkinsOk, manually creating an additional ALB creates the same result.. instance "b" is healthy, instance "a" isn't. Only one difference - the healthchecks from the new ALB are coming from subnet b instead of subnet a
awilkinsSo I guess this rules out egress from the public subnet the ALBs are in as a problem
awilkinsACTION has a horrible thought
awilkinsOk, it's not the instance firewall
amcmrichid check your service event log, is it able to launch a new task while the other is draining? I have my min set to 0 so /can/ kill the current before launching the new one, I get errors about couldn't place task for CPU before it goes about killing the draining task
awilkinsOK, now the original target group rates server "a" as healthy
awilkinsThe new manually created target group still thinks it's unhealthy
richidamcm: Ooo didn't think to look there and didn't know it existed. Let me check that, thanks
csmuleI know this isn't a linux forum, but is there such a thing as a /32 address for ipv6? Trying to limit my security group to a single ipv6, and evidently very green on ipv6.
watmmWondering if someone can confirm my suspicion that if autoscaling replaces an elasticbeanstalk instance it rolls back to the last successful application version deployed via the aws console (which jenkins triggers)
watmmWhat i think i'm experiencing is deployments in between done via the terminal and aws cli - are missing from the rollback
csmulenm, /128.
kcarpenterSon of a bitch. anyone else seeing issues with S3?
roberthlkcarpenter: Which region?
kcarpenterEast 1
kcarpenterGetting Content Length Mismatch errors
rdghm. just occured to me that if I'm using Cognito then I have no failover options do I
cloudyMoonplease be kidding, the rest of the dev ops team went off to a meeting with a joke about s3 dying again
kcarpenterFailed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH
nacellecloudyMoon: nothing here about it:
watmmor twitter
nacellewhere even in the early moments of the s3 outage they put up a note saying they were noticing something, even if all their check marks were saying green (which they were)
cloudyMoonid trust twitter over the status thing after last time :P
kcarpenterhopefully for your sake just me
rdgactually that's a serious question.. if you're using Cognito.. how do you replicate in case the hosted region dies and you need to failover
cloudyMoonidk, run dual cogito with bucket replication?
hspencer"The Amazon Cognito streams feature can be used to backup data. Given the nature of the way Cognito identity ids are allocated it is not feasible to replicate data across regions at the current time."
rdgnice thanks hspencer
rdgmy googlefu was failing
rdgugh that's almost two years old at this point though
chainzkcarpenter: could do a recursive listing and grep through results?
rdghspencer: found a node module where someone made a cognito backup.. last commit..4 months ago.. listed under TODO: create restore lol
hspencerrdg: damn
hspencerthere has to be something else
rdgdoesn't /have/ to be.. i feel like AWS has been releasing products with fewer HA features lately
rdglike they're releasing the main product before it's really cloud worthy to see if people use it.. then add in everything later
rdgmaybe i'm being jaded though
roberthlWell cross-region doesn't really fit in to the AWS HA model. It is an "add-on" for every service they offer, and usually half-assed
hspencerso Cognito Streams are actually Kinesis Streams
rdglike DDB streams
rdgi thought cross region was part of their HA model
rdgor maybe the marketers just like to pretend
roberthlNone of the services do cross-region properly without a lot of custom tooling. And to be honest cross-region is always going to be bespoke, in some cases it is legit running load in two regions simultaenously, in others just failover, and others sharding
hspencerrdg: honestly..looks like the only sane way i see it working, is to use streams to sync data to DynamoDB in another region..yet why do that when you can leverage dynamoDB with cognito
hspencerunless you wanted cognito to manage syncing the data
hspencerand not dynamodb
hspencerthat seems the be the tradeoff
hspencerbtw, not a cognito/dynamodb expert by any means..just using my googlefu to map out possible solutions
hspencerso if something is off here, please let me know
hspenceri don't mind being humbled..:)
rdgwhat do you mean cognito+ddb?
rdgI'm all for being humbled as well here, not much online help for this that I've found
hspencerfor example -
rdgk one sec i'll go read
rdghspencer: oh.. my bad.. when you mentioned ddb + cognito I thought you meant some sort of cognito->ddb dump as a backup or something
impermanenceIs deploying applications on ec2 via lambda an improper use of lambda? or impossible even? or am I hitting the nail on the head?
hspencerthe only way i am seeing to do backup dumps really is using streams
hspencernot sure how efficient that is
rdgyou can use the cli to dump the data but it won't dump passwords
rdgfrom what i can tell
rdgwow the npm script that'll do the backup.. makes it look like you have to use the cli and download via pagination
rdgthat's rough
hspenceryea, that sounds like some serious surgery
hspencertheres gotta be a more less invasive procedure..just gotta be
rdgor someone said "fuck it. the cloud never fails"
rdgnot thinking someone might accidentally kill a few too many S3 servers
sathed_I've got a Jenkins server running on EC2. Lately, it's been freezing shortly after starting the daily jobs in the morning. Today, I noticed something interesting... I'm getting a TON of DHCPREQUESTS ( before the system freezes. I'm running the server on Amazon Linux - any thoughts on what would be causing that? I know, it's on the fence as to whether it's a Jenkins, AWS, or general Linux issue.
sathed_And it's not getting a DHCPACK. It's almost like it's unable to renew its lease, but I don't know why...
nacellesathed_: thats not a ton of requests whatsoever (1 per minute?)
nacelleit looks like those are relayed too
nacellesince they're being sent to a unicast IP
nacelleoh i see, thats on the system itself
nacelleACTION is used to seeing the intermediary boxes :P!P#@&*
sathed_Well, it's every 30 seconds. You don't think that's excessive without receiving a response?
sathed_Yeah, this is on the system itself.
nacelleok, now I see the oddness in that, yes, thats weird.
sathed_Ok, good. I'm not crazy... :)
sathed_But not good... lol
nacelleis it truly frozen or just popped off the network?
nacellelike maybe it changed its ip on account of the dhcp rules/etc.
nacellemaybe try static IP to work around that
sathed_Without being able to connect to the instance, I don't know for sure...
sathed_It is using a static.
nacelleif its static why is it running dhclient?
nacelleI think you can get the console logs... it might say
sathed_Oh, sorry. The public is static. The local is not - that's a good thought though.
nacelleif mean, if thats your issue for the moment, eliminate dhcp and then see if jenkins stays up, etc.
sathed_Yeah, I'll give that a shot... I didn't think about that.
sathed_Thanks nacelle.
impermanenceIs deploying applications on ec2 via lambda an improper use of lambda? or impossible even? or am I hitting the nail on the head?
roberthlLambda is a general purpose tool, so it sure is possible. The one limitation you might encounter is that Lambda functions cannot take more than 300 seconds to execute.
impermanenceroberthl: oh, I see.
sathed_impermanence: I wouldn't say it's improper. And it is possible. But like roberthl said, there's a execution time limit. Depending on what you're doing, it may make more sense to use Lambda to create a CloudFormation Stack, Opsworks Stack/Layer (with Chef), etc.
impermanencesathed_: probably going to stay away from confmgmt tools like puppet, chef. currently using and heading out of that. looking to basically: trigger => lambda => cloudformation => thing
sathed_impermanence: Yeah, we do a fair bit of that. I'd say you're on the right track.
sathed_Can I ask why you're moving away from those tools? I ask because we're going in the opposite direction and I'm not a big fan of Chef (Ruby - yuck!)
kcarpenterWho are you guys using for Domain registration. Just tried to renew 6 domains at Realized it's charging damn near $50 a domain/year
kcarpenterThat seems insane to me
roberthlNew domains I use AWS Route 53 Domain Registration, older domains Gandi directly (which Route 53 resells)
kcarpenterIs Route53 good enough to be a primary registrar of domains, even stuff that isn't served off of AWS infrastructure
roberthlAbsolutely, you can set your own nameservers on Route 53 registered domains
cccyRegeaneWolfeHow about Namecheap?
cccyRegeaneWolfeIs Route 53 cheaper or more reliable?
kcarpenterShit part is pulling over all the domain settings I guess.
roberthlcccyRegeaneWolfe: Well, if you use AWS then it has already established trust, which is in short supply with most domain registrars.
gholmsSort of. It's still gandi doing the legwork.
roberthlWell hopefully AWS have done their due diligence. Gandi has a pretty good reputation as far as domain registrars go.
shalokIs there any way to cloudwatch plot the CPU utilization of all instances in an auto scaling group?
shalokI don't want an aggregate across all instances in the group, I want one line per instance.
djmaxI have an outbound socket connection to a server on the other side of an ipsec VPN tunnel in AWS. It times out after 30 seconds of inactivity, and after looking at the wireshark on the other side, the close comes from AWS. Any ideas why?
fission6how can i tell what the IP of an ec2 is
cloudyMoonin the ec2 console
cloudyMoonclick on the instance and it comes up on the bottom of the page
cloudyMoonyou could use this too
cloudyMoon^ @fission6
DannyBHey, not the normal question, but I am getting repaid by my company for AWS bills I've accrued in tests. Its actually for 8 months worth, but I'd prefer not to have to print and transfer 8 different monthly bills. Is there a way to just get a single yearly summary, or a lifetime amount spent document?
cloudyMoonidk, but you can call those reports with cli so you could proly just script it
cloudyMoonCan you use wildcards in resource names? like resource: "arn:aws:region:accountid:parameter/cloudyMoon*"
roberthlYes, in IAM policies you can
cloudyMoonok.. then thats not the problem D:
cloudyMoonoh.. hey, looks like they totally ignored the ssm prefixes
bethgeHi Lads. Big AWS fan. How would I restrict access to a folder in S3, such that only a certain user of my Cognito UserPool would have access?
amcmPut that user in a group with an access policy for the bucket?
bethgeThe folders have like personal directories, would that still scale?
bethgehave -> are
bethgeLike, one group per user, would that work?
ploshyYou can only have 100 IAM groups in an account
bethgeAh, ok, darn
ploshyAnd only 25 groups per cognito user pool
bethgeMy initial idea was to check in a AWS Lambda function, if the user of the request is the "owner" of the folder, but then I would have to route each s3 request via lambda
ploshyWell, also
ploshyDon't users in Cognito assume a role in the AWS account?
ploshySo every user in the group will have the same permissions
bethgeYeah exactly
ploshyWhat I'm getting at is that every user will appear to be the same person, from the perspective of Lambda or S3
bethgeOh, yeah, I do get the username in the lambda function, so at least I can check such that superman only gets access to the folder /users/superman
ploshyThis looks like it might have what you want
ploshyDown at "S3 Prefix"
ploshy"${}" seems to get the user id
bethgethat sounds just like what i need!
bethgeHoly smokes. mind: blown. I will try it out. Thank you a million @ploshy !
amcm:sub is the UUID key right, not the actual user name, but should be what you want any way. Wouldn't want "joe" to delete his account, and a new "joe" register and have access to the old files
ploshyI agree, tho that's a vulnerability in IAM rn
ploshyAn IAM user arn is tied to the (mutable) user name.
ploshySo IAM users that can change their names can change their arns
bethgeAh, thanks for clarifying. I was actually using usernames.
amcmbethge identity pool != user pool. But you can use a userpool as an auth source for an identity pool
bethgeAh, i see
amcmUser pool usernames are immutable
amcmBut still can have the recycled name problem
bethgeQuick follow up on cognito: If I allow users to be cats and dogs, but with the same login form, I would put them into the same userPool. On signup users choose cat/dog, but since the attribute needs to be writeable for them to choose, the app could nefariously allow them to later change from a dog to a cat. is there any way to avoid that ?
panzonHi, I have a doubt that maybe some of you can solve. Do you know if I attach a volume to my ec2 instance. Do I have to take care about that volume backup? Do you know if aws warranty that nothing gonna happen to my attached volumes?
cliluwpanzon: Yes, you can attach an EBS volume to an EC2 instance. You are responsible for backing up that EBS volume but Amazon makes it very easy with EBS snapshots. No, Amazon has no warranty for EBS volumes.
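The EBS snapshot cliluw mentions is a one-liner (a sketch: the volume id is a placeholder, and the date-stamped description is just one common convention for periodic backups):

```shell
VOLUME_ID="vol-0123456789abcdef0"        # placeholder volume id
DESC="nightly-backup-$(date +%F)"        # e.g. nightly-backup-2017-05-04

# Snapshots are incremental and stored durably in S3 behind the scenes.
aws ec2 create-snapshot --volume-id "$VOLUME_ID" --description "$DESC"
```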
panzoncliluw, thank you, and what about the s3 solution?
cliluwpanzon: What do you mean by S3 solution?
panzonI am searching something that amazon backup continously, in order to avoid all that management in our side
cliluwpanzon: If you back up to S3, you don't have to worry about losing it. S3 is advertised to have 11 nines of durability.
panzonI have heard that aws has an s3 product that you can attach as a volume in your ec2 instance and you not require to take care about back uping that data, because it will be always available... at most you can detach it and attach to another instance
panzonbut data will never get lost
cliluwpanzon: That sounds like Storage Gateway. Unfortunately, I am not familiar with that product as I've never used it.
panzoncliluw, is it true? or maybe I have a wrong idea in mind
cliluwpanzon: Even for S3, Amazon only says that it has eleven nines of durability - they never say data will never be lost. If you go to, you can see that there have been a few occasions where people have lost data they've stored on S3. However, it is exceedingly rare.
matthewadamsanyone know of an aws cli command similar to `aws emr create-default-roles` except that creates default EMR-managed security groups, like those named "ElasticMapReduce-master" & "ElasticMapReduce-slave" when using the EMR console?
matthewadamsthere's no `aws emr create-default-security-groups`...
panzoncliluw, your link was really useful. my volume should be used to store Database information
matthewadamsok, so I asked on SO:
panzonI think that at the end I have to deal with backing up mechanism anyway
Mooniacwhat happens in glacier when you upload the same archive again? Will it replace the one that is there, or drop it, or have both in the vault?
manpearpighi, i had a quick question regarding dynamodb and setting up the primary/sort keys
manpearpigis this the right place to ask?
offby1I might even be able to answer it
HailwoodHey folks, does anyone know if using cloudformation it's possible to add a a NotificationConfiguration for a lambda function to an existing bucket that isn't part of the stack?
roberthlI can't see how that would be done
HailwoodDarn, we're trying to write a cloud formation script that takes two bucket names as parameters input/output and sets up a Lambda function with a trigger on the input bucket that on object creation triggers this lambda function to manipulate and save the manipulated object in the output bucket.
HailwoodBut we need both buckets to already exist for our usecase.
HailwoodAny ideas on a better way of doing this if a cloudformation template isn't possible?
wshakesTrying to get SES to put emails in S3 Bucket... Sending email smtp with the creds provided no problem and domains are approved and policy is applied to the bucket. What am I doing wrong?
wshakesHad to setup mx records. Doh./
PickAndMixHi AWS peeps, how can I set a cookie in the requests header using API Gateway? I tried setting the HTTP Header 'Cookie' but it's not being showed.
PickAndMixI could do a 'Set-Cookie' in my response header though.
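One known wrinkle with PickAndMix's problem: API Gateway only forwards a request header if it is declared on the Method Request and mapped on the Integration Request. The mapping follows the documented `method.`/`integration.` parameter convention, roughly:

```
integration.request.header.Cookie = method.request.header.Cookie
```

With that mapping in place, the client's `Cookie` header should reach the backend integration instead of being dropped.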