

EssayPhD R Language Assignment Help: R Code Writing

Date: 2018-03-23 15:34 | Source: EssayPhD Team | Author: admin

The EssayPhD team has been writing papers and assignments for many years, and R code assignments have always been one of our strengths. Our team is full of top students; below, a computer science PhD from a US top-20 university introduces R language assignment help and gives an overview of R code.


As an open-source language that is easy to learn and free to use, R is extremely popular in data analysis. Beyond statistics departments, social science programs such as economics, sociology, political science, and finance now teach R courses. R is also well established in industry: companies such as Huawei, JD.com, Douban, eBay, and Alibaba run data teams that combine R with Hadoop, and R underlies applications in quantitative investment, promotion strategy, recommender systems, computational advertising, internet finance, and data analysis, as well as visualization work in cloud healthcare, astronomy, and meteorology. In the IEEE programming language ranking, which combines twelve indicators including Google search volume, Google Trends, Twitter mentions, GitHub repositories, and Hacker News posts, R has reached fifth place.


As a professional platform of part-time PhDs, the EssayPhD team includes PhDs who specialize in data analysis, with research interests spanning Statistical Computing, Numerical Optimization, Data Mining, Business Analytics, Generalized Linear Models, Bayesian Statistics, Machine Learning, Financial Mathematics, and more. R language assignments (R code) are therefore something our team is very good at. If you go to an ordinary writing agency, you may well be cheated: such agencies do not understand your requirements and simply outsource your assignment to undergraduates, while most of the high fee you pay is taken by the agency as commission, so you can hardly expect a good result. For R language assignments and data analysis, the EssayPhD team will not let you down. We are online 24/7, with PhDs around the globe taking turns on duty, and when you place an order you can communicate one-on-one with a statistics PhD, so your requirements are fully understood and you can get a high grade with ease.


Below is a successful example of our R language and R code work; the client's course was nonlinear statistical models.


## Nonlinear statistical models

library(faraway)
data(meatspec)  ## load the data

# The "meatspec" data from the "faraway" package is used to
# illustrate ridge regression (RR), PCR, and PLSR.
# Data size: 215 x 101; the 101st column is the response,
# and the predictors are columns 1 to 100.
# The first 172 rows are used as the training set.

mm <- apply(meatspec[1:172,], 2, mean)  ## preprocessing: training column means
mtc <- sweep(meatspec[1:172,], 2, mm)   # center the training data
y <- meatspec$fat[1:172]
yc <- mtc[,101]                         # centered response
trainx <- as.matrix(mtc[,-101])         # centered predictors

mm2 <- apply(meatspec[173:215,], 2, mean)
mtc2 <- sweep(meatspec[173:215,], 2, mm2)  # center the test data by its own means
testx <- as.matrix(mtc2)[,-101]
yt <- mtc2[,101]


## Model evaluation function:
# root mean squared error (RMSE) to measure the performance of each method

rmse <- function(x, y) sqrt(mean((x - y)^2))
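# e.g. rmse(c(1, 2, 3), c(1, 2, 4)) = sqrt(1/3), about 0.577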



#1. LS estimator: ordinary least squares fit

mt.lm <- lm(fat ~ . - 1, data = mtc)  # no intercept, since the data are centered
summary(mt.lm)$r.squared

# The fit of this model is already very good in terms of R^2.
# How well does this model do in predicting the observations in the test sample?
rmse(fitted(mt.lm), yc)         # rmse for training sample
rmse(predict(mt.lm, mtc2), yt)  # rmse for testing sample
# We see that the performance is much worse for the test sample.

# Now, it is quite likely that not all 100 predictors are necessary to make a good
# prediction. In fact, some of them might just be adding noise to the prediction,
# and we could improve matters by eliminating some of them.
kappa(t(trainx) %*% trainx)  # huge condition number: severe multicollinearity


#2. Ridge regression estimator
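# Ridge shrinks the LS estimator by penalizing the size of the coefficients:
# it minimizes ||y - X b||^2 + lambda ||b||^2, giving
# beta_ridge = (X'X + lambda I)^(-1) X'y, which stabilizes the fit when X'X
# is nearly singular, as the condition number above shows.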


library(MASS)  # for lm.ridge and select
gridge <- lm.ridge(yc ~ trainx - 1, lambda = seq(0, 5e-8, 1e-9))

matplot(gridge$lambda, t(gridge$coef), type = "l", lty = 1,
        xlab = expression(lambda), ylab = expression(hat(beta)))
select(gridge)         # GCV picks lambda = 1.8e-8
abline(v = 1.8e-8)

which.min(gridge$GCV)  # index 19, i.e. lambda = 1.8e-8 in the grid above
ypredg <- scale(trainx, center = FALSE, scale = gridge$scales) %*% gridge$coef[, 19] +
  mean(meatspec$fat[1:172])
rmse(ypredg, meatspec$fat[1:172])

ytpredg <- scale(testx, center = FALSE, scale = gridge$scales) %*% gridge$coef[, 19] +
  mean(meatspec$fat[1:172])
rmse(ytpredg, meatspec$fat[173:215])


# 3. Principal Components Regression (PCR)

# Now let's compute the PCA on the training sample predictors
# (prcomp is in the base "stats" package, which absorbed the old "mva" package):
meatpca <- prcomp(meatspec[1:172, -101])

# We can examine the square roots of the eigenvalues:
round(meatpca$sdev, 3)

# The eigenvectors can be found in the object meatpca$rotation:
matplot(1:100, meatpca$rot[, 1:3], type = "l", xlab = "Frequency", ylab = "")

# We can get the PCs themselves from the columns of the object meatpca$x.
# Let's use the first four PCs to predict the response:
mt.pcr <- lm(fat ~ meatpca$x[, 1:4], meatspec[1:172,])
rmse(mt.pcr$fit, meatspec$fat[1:172])


# We do not expect as good a fit using only four variables instead of 100. Even so,
# the fit is not much worse than that of the much bigger model.

# PCR is an example of shrinkage estimation. Let's see where the name comes from.
# We plot the 100 slope coefficients for the full least squares fit:

plot(mt.lm$coef, ylab = "Coefficient")

# We see that the coefficients range in the thousands and that adjacent
# coefficients can be very different.

# The PCR model is y = Z b + e with Z = X U, i.e. y = X (U b) + e, where U holds
# the first four eigenvectors. We compute U b and plot it:
svb <- meatpca$rot[, 1:4] %*% mt.pcr$coef[-1]
plot(svb, ylab = "Coefficient")

# Why use four PCs here?
plot(meatpca$sdev[1:10], type = "l", ylab = "SD of PC", xlab = "PC number")
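# The SDs drop off sharply after the first few components; the scree plot
# flattens out, which is what justifies stopping at four here.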


# Now let's see how well the test sample is predicted. The default version of PCA
# used here centers the predictors, so we need to impose the same centering
# (using the means of the training sample) on the test predictors:

tx <- as.matrix(sweep(meatspec[173:215, -101], 2, mm[-101]))
nx <- tx %*% meatpca$rot[, 1:4]
pv <- cbind(1, nx) %*% mt.pcr$coef
rmse(pv, meatspec$fat[173:215])



# It turns out that we can do better by using more PCs, so we
# figure out how many would give the best result on the test sample:
rmsmeat <- numeric(50)
for (i in 1:50) {
  nx <- tx %*% meatpca$rot[, 1:i]
  mt.pcr.i <- lm(fat ~ meatpca$x[, 1:i], meatspec[1:172,])
  pv <- cbind(1, nx) %*% mt.pcr.i$coef
  rmsmeat[i] <- rmse(pv, meatspec$fat[173:215])
}
plot(rmsmeat, ylab = "Test RMS")
which.min(rmsmeat)
min(rmsmeat)



# Of course, in practice we would not have access to the test sample in advance,
# and so we would not know to use 27 components. We could, of course, reserve part
# of our original dataset for testing. This is sometimes called a validation sample.
# That is a reasonable strategy, but the downside is that we lose this sample from
# our estimation, which degrades its quality. Furthermore, there is the question of
# which and how many observations should go into the validation sample. We can
# avoid this dilemma with the use of cross-validation (CV).
# The "pls" package (which superseded the old pls.pcr package) can compute this CV.
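A minimal sketch of that CV for PCR (assuming the "pls" package, whose pcr() mirrors the plsr() call used below):

library(pls)
pcrcv <- pcr(fat ~ ., data = mtc, ncomp = 50, validation = "LOO")  # leave-one-out CV
plot(RMSEP(pcrcv), legendpos = "topright")  # pick ncomp where the CV RMSEP bottoms out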




# 4. Partial least squares (PLS)
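# Unlike PCR, which picks components to explain the variance of X alone, PLS picks
# components to have high covariance with the response, so it typically needs
# fewer components for the same predictive accuracy.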

library(pls)
plsg <- plsr(fat ~ . - 1, data = mtc, ncomp = 50, validation = "LOO")

summary(plsg)

# The validation results here are root mean squared errors of prediction (RMSEP).
# CV is the ordinary CV estimate, and adjCV is a bias-corrected CV estimate
# (Mevik and Cederkvist 2004). (For LOO CV, there is virtually no difference.)


#It is often simpler to judge the RMSEPs by plotting them:

plot(RMSEP(plsg),legendpos="topright")


# We need around 14 components, as suggested by the cross-validated estimate
# of the RMSEP:
plsg2 <- plsr(fat ~ ., data = mtc, ncomp = 14, validation = "LOO")
plot(plsg2, ncomp = 14, asp = 1, line = TRUE)  # predicted vs. measured values


ypred <- predict(plsg2, ncomp = 14, newdata = mtc)    # training predictions
rmse(ypred, yc)

ytpred <- predict(plsg2, ncomp = 14, newdata = mtc2)  # test predictions
rmse(ytpred, yt)




# Comparison with ridge regression (RR):

c(ytpredg[13], ytpred[13] + mean(meatspec$fat[1:172]), meatspec$fat[172 + 13])

# The PLS prediction (second) is close to the truth (third), but the ridge
# prediction (first) is bad. If we remove this case:
rmse(ytpredg[-13], meatspec$fat[173:215][-13])

# Now RR is better than PLS.

If you need help with R code or R language assignments, feel free to contact us at any time; all your questions can be answered here.
